Black hole

A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite.

In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.

Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole; the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.

Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies.

The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases this creates a quasar, one of the brightest kinds of object in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.

History

The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars, in contrast to the modern concept of an extremely dense object.
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach.

In 1905, Albert Einstein showed that the laws of electromagnetism are invariant under a Lorentz transformation: they are identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein had refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics.

Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time.

Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135

By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse.

In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205

The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217

In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226

The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found.
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel showed that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole is defined by its mass alone. Similar uniqueness results were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.

At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes.

Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.

Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Stephen Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.

While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the centers of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but were ubiquitous in the centers of galaxies: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole.

On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational-wave events have been observed by LIGO and another interferometer, Virgo.

On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017.

In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, since he had died in 2018.

In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.

Definition

A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require observing for an infinite time, at an infinite distance from the black hole, to confirm that nothing ever escapes; the definition therefore cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light.

Properties

The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem.

The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes an uncharged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass.

While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal.

The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly. One stellar-mass black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second, and the Milky Way's central black hole, Sagittarius A*, rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole's mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spins of both progenitor black holes and of the merged hole, but such events are rare.

A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$.

Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$ (in Gaussian units), where G is the gravitational constant and M is the black hole's mass.
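These bounds are straightforward to evaluate numerically. The sketch below checks the Kerr–Newman inequality given above and computes the dimensionless spin magnitude; the 10-solar-mass hole and its 70%-of-extremal spin are illustrative assumptions, not measured values.

```python
import math

# Kerr-Newman extremality bound (SI units):
#   Q^2/(4 pi eps0) + c^2 J^2 / (G M^2)  <=  G M^2
# and the dimensionless spin a* = c J / (G M^2), with 0 <= a* <= 1.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
M_sun = 1.989e30     # solar mass, kg

def is_subextremal(M, J, Q=0.0):
    """True if (M, J, Q) satisfies the bound, i.e. an event horizon exists."""
    lhs = Q**2 / (4 * math.pi * eps0) + (c * J)**2 / (G * M**2)
    return lhs <= G * M**2

M = 10 * M_sun              # illustrative 10 M_sun black hole (assumed)
J_max = G * M**2 / c        # extremal angular momentum, ~8.8e43 kg m^2/s
J = 0.7 * J_max             # assumed spin, 70% of extremal

print(is_subextremal(M, J))     # True: sub-extremal, a horizon exists
print(c * J / (G * M**2))       # dimensionless spin a* = 0.7 by construction
```

Setting J equal to J_max (with Q = 0) makes the bound an equality, the extremal case described above.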
Classification

Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10^−5 grams at formation. These very small black holes are sometimes called micro black holes.

Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse is stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle, which prevents identical particles from occupying the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star.

Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10^2 to 10^5 solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range.

The black holes with the largest masses are called supermassive black holes, with masses more than 10^6 times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10^9–10^10 solar masses. Theoretical models predict that the accretion disc that feeds black holes will become unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass.

Structure

While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes some of the brightest objects in the universe.

Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One proposal is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum.

Accretion disks can be classified as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal, doughnut-like shape.

Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part travelling away appearing redder and dimmer.

In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\text{ISCO}} = 3\,r_{\text{s}} = \frac{6\,GM}{c^{2}}$, where $r_{\text{ISCO}}$ is the radius of the ISCO, $r_{\text{s}}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO moves inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
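For the non-spinning case, these characteristic radii follow directly from the mass. A minimal Python sketch (Schwarzschild case only, SI constants; the photon sphere factor of 1.5 is discussed below) evaluating the horizon, photon sphere, and ISCO radii:

```python
# Characteristic radii of a non-spinning (Schwarzschild) black hole of mass M:
#   event horizon   r_s    = 2GM/c^2
#   photon sphere   r_ph   = 1.5 * r_s   (see discussion below)
#   ISCO            r_isco = 3 * r_s

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def photon_sphere_radius(M):
    return 1.5 * schwarzschild_radius(M)

def isco_radius(M):
    return 3 * schwarzschild_radius(M)

for label, M in [("10 M_sun stellar hole", 10 * M_sun),
                 ("Sgr A* (4.3e6 M_sun)", 4.3e6 * M_sun)]:
    print(label,
          f"r_s = {schwarzschild_radius(M):.3e} m,",
          f"r_isco = {isco_radius(M):.3e} m")
# 10 M_sun: r_s ~ 29.5 km, ISCO ~ 88.6 km; Sgr A*: r_s ~ 1.27e10 m.
```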
At the extremes, around a maximally spinning black hole the ISCO for a particle orbiting retrograde can lie as far out as $9\,GM/c^{2}$ (that is, $4.5\,r_{\text{s}}$), while the ISCO for a particle orbiting prograde can be as close as the event horizon itself.

The photon sphere is a spherical boundary on which photons moving tangent to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter.

For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere lies between 1 and 3 gravitational radii ($GM/c^{2}$) from the center of the black hole, while for a photon orbiting retrograde, it lies between 3 and 5 gravitational radii from the center. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, non-rotating black hole, there is only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there are always two photon spheres, with the exact radii depending on the parameters of the black hole.

Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.

The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this region it is no longer possible for free-falling matter to follow circular orbits or to halt its final descent into the black hole. Instead, it rapidly plunges toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape the black hole's gravitational pull.

For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through $r_{\text{s}} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~\text{km}$, where $r_{\text{s}}$ is the Schwarzschild radius and $M_{\odot}$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to $r_{+} = \frac{GM}{c^{2}}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10^8 M☉ black hole is comparable to that of water.
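This scaling of size and mean density with mass can be checked in a few lines. A minimal sketch in Python, treating the mean density as mass divided by the Euclidean volume of a sphere of radius $r_{\text{s}}$ (a heuristic, since the interior geometry is not Euclidean):

```python
import math

# Schwarzschild radius r_s = 2GM/c^2 and the heuristic mean density
# rho = M / (4/3 pi r_s^3), which falls off as 1/M^2.

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def mean_density(M):
    r = schwarzschild_radius(M)
    return M / (4 / 3 * math.pi * r**3)

print(schwarzschild_radius(M_sun) / 1e3)   # ~2.95 km for one solar mass
print(mean_density(M_sun))                 # ~1.8e19 kg/m^3, denser than a nucleus
print(mean_density(1e8 * M_sun))           # ~1.8e3 kg/m^3, roughly water
```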
The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate.

To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole does not notice any of these effects as they cross the event horizon. Their own clock appears to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222

Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off of the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted.

Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and uncharged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume.

Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole is large, but not infinite.

Formation

Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by annihilation of dark matter), or the collapse of hypothetical self-interacting dark matter.

Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing progressively more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse.

While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.

Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process for building supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10^5 M☉ could have formed in this way and then grown to ~10^9 M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare.

In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed black holes to form via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in the regions would become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10^−8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10^15 g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and lacked the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang.

In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10^−25 seconds, posing no threat to the Earth.

Evolution

Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as a binary of supermassive black holes approaches, most nearby stars are ejected, leaving little for the remaining black holes to interact with gravitationally in a way that would bring them closer together. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec.

When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.

Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
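The Eddington limit follows from balancing radiation pressure on electrons against gravity acting on protons. A minimal Python sketch of the standard formula for ionized hydrogen, $L_{\text{Edd}} = 4\pi G M m_{p} c / \sigma_{T}$ (SI constants; the masses fed in are illustrative):

```python
import math

# Eddington luminosity for ionized hydrogen:
#   L_Edd = 4 pi G M m_p c / sigma_T
# Radiation pushes on electrons (Thomson cross-section sigma_T),
# gravity pulls on protons (mass m_p); the two balance at L_Edd.

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_p = 1.6726e-27       # proton mass, kg
sigma_T = 6.652e-29    # Thomson cross-section, m^2
M_sun = 1.989e30       # kg

def eddington_luminosity(M):
    return 4 * math.pi * G * M * m_p * c / sigma_T

print(eddington_luminosity(M_sun))         # ~1.26e31 W per solar mass
print(eddington_luminosity(1e9 * M_sun))   # ~1.26e40 W for a 1e9 M_sun hole
```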
Accretion beyond this limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation.

The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected.

If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre.

The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10^−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any.
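The mass scales quoted above can be reproduced from the Hawking temperature formula $T_{\text{H}} = \hbar c^{3} / (8\pi G M k_{\text{B}})$; a minimal Python sketch (SI constants):

```python
import math

# Hawking temperature T_H = hbar c^3 / (8 pi G M k_B),
# inversely proportional to the black hole mass M.

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
k_B = 1.3807e-23    # Boltzmann constant, J/K
M_sun = 1.989e30    # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def mass_for_temperature(T):
    """Mass whose Hawking temperature equals T (inverting the formula)."""
    return hbar * c**3 / (8 * math.pi * G * T * k_B)

print(hawking_temperature(M_sun))   # ~6.2e-8 K: the 62 nK quoted for 1 M_sun
print(mass_for_temperature(2.7))    # ~4.5e22 kg: below the Moon's ~7.3e22 kg
```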
The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. They are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have an entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29

Observational evidence

Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11

The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.

Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long perpendicular arms. The beams reflect off mirrors at the ends of the arms and converge at the intersection, where they are arranged to cancel each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise in order to detect them. Since the first detection in 2015, multiple gravitational-wave signals from black holes have been detected and analyzed.
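The required sensitivity can be made concrete with a back-of-the-envelope strain estimate; a minimal sketch using typical published orders of magnitude for LIGO (the strain value is an illustrative assumption, not a specific measurement):

```python
# A gravitational wave with dimensionless strain h changes an
# interferometer arm of length L by roughly dL = h * L.

h = 1e-21      # typical strain of a detectable event (order of magnitude)
L = 4e3        # LIGO arm length, m

dL = h * L
print(dL)              # ~4e-18 m
print(dL / 1.7e-15)    # a few thousandths of a proton's ~1.7e-15 m diameter
```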
The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a mass of 2.6×10^6 M☉ must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the mass estimate for Sagittarius A* to 4.3×10^6 M☉, contained within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not by itself prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.
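The mass inference works through Kepler's third law, $M = 4\pi^{2}a^{3}/(G T^{2})$. A minimal Python sketch using approximate published orbital elements for S2 (a semi-major axis of roughly 1,000 AU and a period of about 16 years, treated here as assumptions):

```python
import math

# Enclosed mass from a Keplerian orbit: M = 4 pi^2 a^3 / (G T^2).

G = 6.674e-11     # m^3 kg^-1 s^-2
AU = 1.496e11     # astronomical unit, m
YEAR = 3.156e7    # Julian year, s
M_sun = 1.989e30  # kg

def kepler_mass(a, T):
    """Central mass implied by semi-major axis a (m) and period T (s)."""
    return 4 * math.pi**2 * a**3 / (G * T**2)

# Approximate orbital elements of the star S2 around Sagittarius A*.
a = 1.03e3 * AU   # ~1,000 AU semi-major axis (assumed value)
T = 16.0 * YEAR   # ~16-year orbital period (assumed value)

print(kepler_mass(a, T) / M_sun)   # ~4e6 solar masses, matching the estimates above
```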
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as atypical spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes rests on the assumption of an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Still more extreme hypothetical objects, called electroweak stars, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
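The brightening-and-fading behaviour of a microlensing event follows the standard point-lens magnification formula; a short Python sketch, where the impact parameter u is expressed in units of the lens's Einstein radius (the specific u values are arbitrary illustrative choices):

```python
# Point-lens microlensing: the source brightens as the lens approaches
# the line of sight (smaller u) and fades again as it recedes.
import math

def magnification(u: float) -> float:
    """Magnification of a point source by a point lens (Paczynski formula)."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

for u in (2.0, 1.0, 0.5, 0.3, 0.1):
    print(f"u = {u:3.1f}  ->  {magnification(u):5.2f}x brighter")
```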
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star (a hypothetical type of compact star denser than a neutron star) is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126 Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms may allow black holes to grow faster than the theoretical Eddington limit; for example, dense gas in the accretion disk can limit the outward radiation pressure that would otherwise halt accretion. However, the formation of bipolar jets may prevent super-Eddington accretion rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with the characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the 1935 short story Starship Invincible, with its "hole in space". As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet near a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship that, from the perspective of an outside observer, approaches but never crosses the event horizon of a black hole, due to time dilation. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Secure_Shell] | [TOKENS: 2734] |
Contents Secure Shell The Secure Shell Protocol (SSH Protocol) is a cryptographic network protocol for operating network services securely over an unsecured network. Its most notable applications are remote login and command-line execution. SSH was designed for Unix-like operating systems as a replacement for Telnet and unsecured remote Unix shell protocols, such as the Berkeley Remote Shell (rsh) and the related rlogin and rexec protocols, which all use insecure, plaintext methods of authentication, such as passwords. Since mechanisms like Telnet and Remote Shell are designed to access and operate remote computers, sending the authentication tokens (e.g. username and password) for this access to these computers across a public network in an unsecured way poses a great risk of third parties obtaining the password and achieving the same level of access to the remote system as the telnet user. Secure Shell mitigates this risk through the use of encryption mechanisms that are intended to hide the contents of the transmission from an observer, even if the observer has access to the entire data stream. Finnish computer scientist Tatu Ylönen designed SSH in 1995 and provided an implementation in the form of two commands, ssh and slogin, as secure replacements for rsh and rlogin, respectively. Subsequent development of the protocol suite proceeded in several developer groups, producing several variants of implementation. The protocol specification distinguishes two major versions, referred to as SSH-1 and SSH-2. The most commonly implemented software stack is OpenSSH, released in 1999 as open-source software by the OpenBSD developers. Implementations are distributed for all types of operating systems in common use, including embedded systems. SSH applications are based on a client–server architecture, connecting an SSH client instance with an SSH server. SSH operates as a layered protocol suite comprising three principal hierarchical components: the transport layer provides server authentication, confidentiality, and integrity; the user authentication protocol validates the user to the server; and the connection protocol multiplexes the encrypted tunnel into multiple logical communication channels. Definition SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary. SSH may be used in several methodologies. In the simplest manner, both ends of a communication channel use automatically generated public-private key pairs to encrypt a network connection, and then use a password to authenticate the user. When the public-private key pair is generated by the user manually, the authentication is essentially performed when the key pair is created, and a session may then be opened automatically without a password prompt. In this scenario, the public key is placed on all computers that must allow access to the owner of the matching private key, which the owner keeps private. While authentication is based on the private key, the key is never transferred through the network during authentication. SSH only verifies that the same person offering the public key also owns the matching private key. In all versions of SSH, it is important to verify unknown public keys, i.e., associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user. 
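As an illustration of the key-based login flow described above, here is a minimal sketch using the third-party Python library paramiko; the hostname, username, and key path are placeholders, and the client is configured to reject servers whose host keys have not already been verified:

```python
# A sketch of public-key SSH authentication with the paramiko library.
import paramiko

client = paramiko.SSHClient()
# Trust only hosts already present in the system known_hosts file;
# unknown host keys are rejected rather than silently accepted.
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.RejectPolicy())

client.connect("example.com", username="alice",
               key_filename="/home/alice/.ssh/id_ed25519")
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```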
Authentication: OpenSSH key management On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys. This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end, and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security, the private key itself can be locked with a passphrase. The private key is also searched for in standard locations, and its full path can be specified with a command-line option (-i for ssh). The ssh-keygen utility produces the public and private keys, always in pairs. Use SSH is typically used to log into a remote computer's shell or command-line interface (CLI) and to execute commands on a remote server. It also supports mechanisms for tunneling, forwarding TCP ports and X11 connections, and it can be used to transfer files using the associated SSH File Transfer Protocol (SFTP) or Secure Copy Protocol (SCP). SSH uses the client–server model. An SSH client program is typically used for establishing connections to an SSH daemon, such as sshd, which accepts remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default, but proprietary, freeware and open source versions of various levels of complexity and completeness did and do exist (see Comparison of SSH clients). Microsoft subsequently ported the OpenSSH source code to Windows, and an official Win32 port of OpenSSH has been included since Windows 10 version 1709. File managers for UNIX-like systems (e.g., Konqueror) can use the FISH protocol to provide a split-pane GUI with drag-and-drop. The open source Windows program WinSCP provides similar file management (synchronization, copy, remote delete) capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. Crostini on ChromeOS comes with OpenSSH by default. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine. The IANA has assigned TCP port 22, UDP port 22 and SCTP port 22 for this protocol. IANA had listed the standard TCP port 22 for SSH servers as one of the well-known ports as early as 2001. SSH can also be run using SCTP rather than TCP as the connection-oriented transport layer protocol. Historical development In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology in Finland, designed the first version of the protocol (now called SSH-1), prompted by a password-sniffing attack on his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which provided neither strong authentication nor confidentiality. He chose the port number 22 because it is between telnet (port 23) and ftp (port 21). Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity.
Towards the end of 1995, the SSH user base had grown to 20,000 users in fifty countries. In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces of free software, such as GNU libgmp, but later versions released by SSH Communications Security evolved into increasingly proprietary software. It was estimated that by 2000, the number of users had grown to 2 million. In 2006, after being discussed in a working group named "secsh", a revised version of the SSH protocol, SSH-2, was adopted as a standard. This version offers improved security and new features, but is not compatible with SSH-1. For example, it introduces new key-exchange mechanisms such as Diffie–Hellman key exchange and improved data integrity checking via message authentication codes such as HMAC-MD5 or HMAC-SHA-1, which can be negotiated between client and server. SSH-2 also adds stronger encryption methods such as AES, which eventually replaced weaker and compromised ciphers from the previous standard such as 3DES. New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection. Due to SSH-2's superiority and popularity over SSH-1, some implementations such as libssh (v0.8.0+), Lsh and Dropbear eventually supported only the SSH-2 protocol. In January 2006, well after version 2.1 was established, RFC 4253 specified that an SSH server supporting 2.0 as well as prior versions should identify its protocol version as 1.99. This version number does not reflect a historical software revision, but a method to identify backward compatibility. In 1999, developers, desiring availability of a free software version, restarted software development from the 1.2.12 release of the original SSH program, which was the last released under an open source license. This served as a code base for Björn Grönvall's OSSH software. Shortly thereafter, OpenBSD developers forked Grönvall's code and created OpenSSH, which shipped with release 2.6 of OpenBSD. From this version, a "portability" branch was formed to port OpenSSH to other operating systems. As of 2005, OpenSSH was the single most popular SSH implementation, being the default version in a large number of operating system distributions. OSSH, meanwhile, has become obsolete. OpenSSH continues to be maintained and supports the SSH-2 protocol, having expunged SSH-1 support from the codebase in the OpenSSH 7.6 release. In 2023, an alternative to traditional SSH was proposed under the name SSH3 by PhD student François Michel and Professor Olivier Bonaventure, and its code has been made open source. This new version implements the original SSH Connection Protocol but operates on top of HTTP/3, which runs on QUIC, and offers several additional features. However, the name SSH3 is under discussion, and the project aims to rename itself to a more suitable name. The discussion stems from the fact that this new implementation significantly revises the SSH protocol, suggesting it should not be called SSH3.
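The protocol version a server announces, including the backward-compatibility value 1.99 discussed above, can be observed directly, since an SSH server sends a plaintext identification string as soon as a TCP connection is opened (per RFC 4253). A minimal Python sketch; the hostname is a placeholder:

```python
# Read an SSH server's protocol identification string, e.g.
# "SSH-2.0-OpenSSH_9.6", or "SSH-1.99-..." for a server that also
# accepts pre-2.0 clients.
import socket

with socket.create_connection(("example.com", 22), timeout=5) as sock:
    banner = sock.recv(256).decode("ascii", errors="replace").strip()
print(banner)
```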
The Secure Shell protocols are used in several file transfer mechanisms. Architecture The SSH protocol has a layered architecture with three separate components: the transport layer, the user authentication protocol, and the connection protocol. This open architecture provides considerable flexibility, allowing the use of SSH for a variety of purposes beyond a secure shell. The functionality of the transport layer alone is comparable to Transport Layer Security (TLS); the user-authentication layer is highly extensible with custom authentication methods; and the connection layer provides the ability to multiplex many secondary sessions into a single SSH connection, a feature comparable to BEEP and not available in TLS. Algorithms Vulnerabilities In 1998, a vulnerability was described in SSH 1.5 which allowed the unauthorized insertion of content into an encrypted SSH stream due to insufficient data integrity protection from CRC-32 used in this version of the protocol. A fix known as the SSH Compensation Attack Detector was introduced into most implementations. Many of these updated implementations contained a new integer overflow vulnerability that allowed attackers to execute arbitrary code with the privileges of the SSH daemon, typically root. In January 2001, a vulnerability was discovered that allows attackers to modify the last block of an IDEA-encrypted session. The same month, another vulnerability was discovered that allowed a malicious server to forward a client authentication to another server. Since SSH-1 has inherent design flaws that make it vulnerable, it is now generally considered obsolete and should be avoided by explicitly disabling fallback to SSH-1. Most modern servers and clients support SSH-2. In November 2008, a theoretical vulnerability was discovered for all versions of SSH which allowed recovery of up to 32 bits of plaintext from a block of ciphertext that was encrypted using what was then the standard default encryption mode, CBC. The most straightforward solution is to use CTR, counter mode, instead of CBC mode, since this renders SSH resistant to the attack. On December 28, 2014, Der Spiegel published classified information leaked by whistleblower Edward Snowden which suggests that the National Security Agency may be able to decrypt some SSH traffic. The technical details associated with such a process were not disclosed. A 2017 analysis of the CIA hacking tools BothanSpy and Gyrfalcon suggested that the SSH protocol was not compromised. A novel man-in-the-middle attack against most current SSH implementations was discovered in 2023. It was named the Terrapin attack by its discoverers. However, the risk is mitigated by the requirement to intercept a genuine SSH session, and the attack is restricted in its scope, fortuitously resulting mostly in failed connections. The SSH developers have stated that the major impact of the attack is to degrade the keystroke timing obfuscation features of SSH. The vulnerability was fixed in OpenSSH 9.6, but requires both client and server to be upgraded for the fix to be fully effective. Standards documentation A series of RFC publications by the IETF "secsh" working group document SSH-2 as a proposed Internet standard, and the protocol specifications were later updated by subsequent publications. In addition, the OpenSSH project includes several vendor protocol specifications and extensions. See also References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Oswald_(TV_series)] | [TOKENS: 576] |
Contents Oswald (TV series) Oswald is a preschool children's animated television series created by Dan Yaccarino and developed by Lisa Eve Huberman. The show was co-produced by HIT Entertainment (later acquired by Mattel) and Nickelodeon. The titular character is an anthropomorphic blue octopus named Oswald who lives in an apartment complex with his dachshund Weenie. A total of 26 episodes were produced. In the United States, the series premiered on Nickelodeon (as part of its Nick Jr. block) on 20 August 2001. Reruns were also broadcast on CBS (during the Nick Jr. on CBS block) and on Noggin. When the Noggin brand was relaunched as a streaming service in 2015, all 26 episodes of Oswald were made available for streaming. Prior to airing, Brown Johnson (senior vice president of Nick Jr.) said "Dan Yaccarino has created an octopus who could be a pre-schooler's best friend". Premise The series is set in Big City, a colorful world populated by anthropomorphic animals, mythological creatures and humanoid beings. Each episode follows the daily experiences of an anthropomorphic blue octopus named Oswald, accompanied by his beloved hot dog-shaped dog, Weenie, and their life in the cheerful and whimsically designed community of Big City. Commonly, the program concentrates on Oswald's experiences with friends, acquaintances and neighbors, including Henry, a penguin, and Daisy, a flower, among others, and his patient methods of coping with or tolerating different situations and dilemmas, along with his thoroughly optimistic outlook on life. Characters Episodes Release In the United States, Oswald first aired on the Nick Jr. television block on Nickelodeon on 20 August 2001. It was removed from the block on 10 October 2003, and from the network on 24 May 2005. On 7 April 2003, Oswald started airing on Noggin during its daytime preschool block. Reruns also aired on the 24-hour Nick Jr. Channel upon its launch in 2009, but were removed on 11 December 2014. The show aired on Nick on CBS for one year, from 22 September 2001 to 7 September 2002. The series was added to Paramount+ (at the time CBS All Access) in January 2021. Paramount Home Entertainment is the VHS and DVD distributor for the series in the United States, while HIT Entertainment is the VHS and DVD distributor internationally. Reception The series received four out of five stars from Common Sense Media. References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Corona_Australis] | [TOKENS: 4050] |
Contents Corona Australis Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, an ostrich nest, a tent, or even a hut belonging to a rock hyrax. Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are its two brightest stars, each with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System: a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly. Name The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922. In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations. The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively. Characteristics Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 17h 58.3m and 19h 19.0m, while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around 30 June and ranks 80th in area. Only visible at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States. Features While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming. The only star in the constellation to have received a proper name is Alfecca Meridiana, or Alpha CrA.
The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, Alfecca means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; at present, it has a luminosity 31 times greater than the Sun's, and a radius and mass 2.3 times the Sun's. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, making it one of the highest-luminosity K0-type stars visible to the naked eye. At 100 million years old, it has a radius of 43 solar radii (R☉) and a mass of between 4.5 and 5 solar masses (M☉). Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye. Some of the more prominent double stars include Gamma Coronae Australis, a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems are known as contact binaries, as the component stars are so close together that they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta1 and Eta2 CrA, which form an optical double. Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white. Kappa Coronae Australis is an easily resolved optical double; the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa2 is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa1 is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double star splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees. Zeta Coronae Australis is a rapidly rotating main sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62.
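The combined magnitude quoted for Gamma Coronae Australis can be verified by adding the fluxes of its two components, since magnitudes are logarithmic; a short Python check:

```python
# Two m = 5.01 components should combine to roughly magnitude 4.2.
import math

m1 = m2 = 5.01
total_flux = 10 ** (-m1 / 2.5) + 10 ** (-m2 / 2.5)
combined = -2.5 * math.log10(total_flux)
print(f"{combined:.2f}")  # ~4.26, matching the quoted 4.2
```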
Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted. The Corona Australis Molecular Cloud is a dark molecular cloud just north of Beta Coronae Australis. Illuminated by a number of embedded reflection nebulae, the cloud fans out from Epsilon Coronae Australis eastward along the constellation border with Sagittarius. It contains 7,000 M☉ of material, including Herbig–Haro objects (protostars) and some very young stars, and at 430 light years (130 parsecs) it is one of the closest star-forming regions to the Solar System, lying at the surface of the Local Bubble. The first nebulae of the cloud were recorded in 1865 by Johann Friedrich Julius Schmidt. Between Epsilon and Gamma Coronae Australis, the cloud contains the dark nebula and star-forming region Bernes 157. It is 55 by 18 arcminutes in extent and possesses several stars around magnitude 13. These stars are dimmed by up to 8 magnitudes by the obscuring dust clouds. At the center of the active star-forming region lies the Coronet cluster (also called the R CrA Cluster), which is used in studying star and protoplanetary disk formation. R Coronae Australis (R CrA) is an irregular variable star ranging from magnitude 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. Other stars of the cluster include S Coronae Australis, a G-class dwarf and T Tauri star. Nearby to the north, another young variable star, TY Coronae Australis, illuminates the reflection nebula NGC 6726/NGC 6727. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby. The globular cluster NGC 6723, which can be seen adjacent to the nebulosity, is not part of the cloud; it lies in the neighbouring constellation of Sagittarius and is much further away. IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments. The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC-class Wolf–Rayet star. IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north–south direction. Descriptions of its color encompass blue, blue-tinged green, and green-tinged blue. Corona Australis' location near the Milky Way means that galaxies are uncommonly seen. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two merging galaxies, one an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium, 3.9 degrees west-southwest of Beta Sagittarii.
However, amateur telescopes will only show a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast–southwest. Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th-magnitude star GSC 7914 178 1 and five 10th- to 11th-magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Described as between magnitude 6.3 and magnitude 6.6, it is visible in binoculars and small telescopes. Around 22,000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars, but the core remains unresolved. The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate. In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related. History Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea". In the 3rd century BC, the Greek didactic poet Aratus wrote of, but did not name, the constellation, instead calling the two crowns Στεφάνοι (Stephanoi). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος (Stephanos notios), "Southern Wreath", while other authors associated it with either Sagittarius (having fallen off his head) or Centaurus; with the former, it was called Corona Sagittarii. Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as Parvum Coelum ("Canopy", "Little Sky") in the 5th century. The 18th-century French astronomer Jérôme Lalande gave it the names Sertum Australe ("Southern Garland") and Orbiculus Capitis, while German poet and author Philippus Caesius called it Corolla ("Little Crown") or Spira Australis ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon.
Sometimes, Corona Australis was depicted not as the wreath of Sagittarius but as arrows held in his hand. Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky. In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ). The constellation itself was known as ti'en pieh ("Heavenly Turtle") and, during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese, and it hence supplanted the turtle in this role. Arabic names for Corona Australis include Al Ķubbah "the Tortoise", Al Ĥibā "the Tent" and Al Udḥā al Na'ām "the Ostrich Nest". It was later given the name Al Iklīl al Janūbiyyah, which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively. The ǀXam-speaking San people of South Africa knew the constellation as ≠nabbe ta !nu "house of branches"; it was owned originally by the Dassie (rock hyrax), with the star pattern depicting people sitting in a semicircle around a fire. The indigenous Boorong people of northwestern Victoria saw it as Won, a boomerang thrown by Totyarguil (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the Usiam, and Orion, called the Seg. The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Coronae Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called Tanuma, and in the Tuamotus it was called Na Kaua-ki-Tonga. See also References SIMBAD External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Telnet] | [TOKENS: 1632] |
Contents Telnet Telnet (sometimes stylized TELNET) is a client-server application protocol that provides access to virtual terminals of remote systems on local area networks or the Internet. It is a protocol for bidirectional 8-bit communications. Its main goal was to connect terminal devices and terminal-oriented processes. The name "Telnet" refers to two things: a protocol specifying how two parties are to communicate, and a software application that implements the protocol as a service. User data is interspersed in-band with Telnet control information in an 8-bit byte-oriented data connection over the Transmission Control Protocol (TCP). Telnet transmits all information, including usernames and passwords, in plaintext, so it is not recommended for security-sensitive applications such as remote management of routers. Telnet's use for this purpose has waned significantly in favor of SSH. Some extensions to Telnet which would provide encryption have been proposed. Description The Telnet protocol is a client-server protocol that runs on a reliable connection-oriented transport. Most often, a Telnet client connects over TCP to port 23 or 2323, where a Telnet server application is listening. The Telnet protocol abstracts any terminal as a Network Virtual Terminal (NVT). The client must simulate an NVT using the NVT codes when messaging the server. Telnet predates the TCP/IP protocol suite and originally ran over the Network Control Protocol (NCP). The Telnet service is best understood in the context of a user with a simple terminal using the local Telnet program (known as the client program) to run a logon session on a remote computer, where the user's communications needs are handled by a Telnet server program. A Telnet service is an application providing services over the Telnet protocol. Most operating systems provide a service that can be installed or enabled to provide Telnet services to clients. The official specification stylizes the name as TELNET, not as an acronym or abbreviation. In a 1972 paper, when discussing one of the early forms of the protocol, Stephen Crocker et al. used "TELNET" explicitly as an abbreviation of "telecommunications network". In his 2015 book WHOIS Running the Internet: Protocol, Policy, and Privacy, Internet researcher Garth O. Bruen claims that Telnet was originally short for "Teletype Over Network Protocol". History Telnet was originally developed for ARPANET in 1969. Initially, it was an ad hoc protocol with no formal specification, but after extensive work in the 1970s, including numerous RFCs, it was officially formalized in RFC 854 and RFC 855, which together form Internet Standard 8. Since then, many additional RFCs have updated or extended the Telnet specification, both to address issues in the original standard and to add new capabilities. Some of these extensions have also been adopted as Internet standards, particularly standards 27 through 32 (see below). Security vulnerabilities Telnet is vulnerable to network-based cyberattacks, such as the packet sniffing of sensitive information, including passwords, and fingerprinting. Telnet services can be exploited to leak information about the server (such as hostnames, IP addresses, and brand) by packet sniffing the banner. This information can then be searched to determine whether a Telnet service accepts a connection without authentication. Telnet is frequently exploited by malware because it is often improperly configured.
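The banner leakage described above can be illustrated with a raw TCP connection; a minimal Python sketch in which the hostname is a placeholder. Telnet control sequences begin with the IAC byte (255), so the first data a server sends typically mixes option negotiations with a human-readable banner:

```python
# Connect to a Telnet service and inspect the initial bytes it sends.
import socket

with socket.create_connection(("example.com", 23), timeout=5) as sock:
    data = sock.recv(1024)

# IAC = 0xff introduces Telnet commands; everything else is ordinary text.
iac_count = sum(1 for b in data if b == 0xff)
print(f"received {len(data)} bytes, {iac_count} IAC command markers")
print(data.decode("ascii", errors="replace"))
```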
Telnet is targeted by attackers more frequently than other common protocols, especially when compared to UPnP, CoAP, MQTT, AMQP, and XMPP. Common targets are Internet of things devices, routers, and modems. The SANS Institute recommends that the use of Telnet for remote logins should be discontinued under normal circumstances. Extensions to Telnet provide Transport Layer Security (TLS) security and Simple Authentication and Security Layer (SASL) authentication that address these concerns. However, most Telnet implementations do not support these extensions, and they do not address other vulnerabilities such as the parsing of banner information. Telnet over VPN is a viable option if SSHv2 is not supported, or if a VPN is already used to securely tunnel other application data to the remote network in which the Telnet server resides. However, precautions must be taken: ideally the VPN should terminate on the Telnet server itself, unless the LAN has additional security measures against eavesdropping and modification by other devices, such as additional encryption and/or VLANs. This is because Telnet traffic leaves the VPN server in its insecure plaintext form after it is decrypted. The VPN software should be trusted and heavily audited (e.g. OpenVPN, WireGuard, IPsec), preferably using certificate-based public-key mutual authentication. IBM 5250 or 3270 workstation emulation is supported via custom Telnet clients, TN5250/TN3270, and IBM i systems. Clients and servers designed to pass IBM 5250 data streams over Telnet generally do support SSL encryption, as SSH does not include 5250 emulation. Under IBM i (also known as OS/400), port 992 is the default port for TelnetS (Telnet over SSL/TLS). Uses Historically, Telnet provided access to a command-line interface on a remote host. However, because of serious security concerns when using Telnet over an open network such as the Internet, its use for this purpose has waned significantly in favor of SSH. The usage of Telnet for remote management has declined rapidly, especially on the public Internet, in favor of the Secure Shell (SSH) protocol. SSH provides much of the functionality of Telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public-key authentication, to ensure that the remote computer is actually who it claims to be. The Telnet protocol is mainly used for legacy equipment that does not support more modern communication mechanisms. For example, many industrial and scientific devices only have Telnet available as a communication option. Some are built with only a standard RS-232 port and use a serial server hardware appliance to provide the translation between the TCP/Telnet data and the RS-232 serial data. In such cases, SSH is not an option unless the interface appliance can be configured for SSH (or is replaced with one supporting SSH). Telnet support has become highly unusual in new applications, though amateur radio operators and multi-user dungeons do continue to use it. Security researchers estimated that 7,096,465 exposed systems on the Internet continued to use Telnet as of 2021. However, estimates of this number have varied significantly, depending on the number of ports scanned beyond the default TCP port 23. The Telnet client may be used in debugging network services such as SMTP, IRC, or HTTP servers, by issuing commands to the server and examining the responses.
In this case, when the Telnet client establishes a TCP connection to a port other than the standard Telnet server port, it does not use the Telnet protocol, and can instead be used to send and receive data over the TCP connection directly. Technical details The technical details of Telnet are defined by a variety of specifications, including RFC 854. Telnet commands consist of at least two bytes: the first byte is the IAC escape character (byte 255), followed by the byte code for a given command. All data octets except 0xff are transmitted over Telnet as is. (0xff, or 255 in decimal, is the IAC byte, "Interpret As Command", which signals that the next byte is a Telnet command. The command to insert 0xff into the stream is 0xff, so 0xff must be escaped by doubling it when sending data over the Telnet protocol.) Telnet has a variety of options that terminals implementing Telnet should support. Client applications In popular culture Star Wars: Episode IV – A New Hope from 1977 has been recreated as a text art movie served through Telnet. See also References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Contiguous_United_States] | [TOKENS: 1792] |
Contents Contiguous United States The contiguous United States, also known as the U.S. mainland, officially referred to as the conterminous United States, consists of the 48 adjoining U.S. states and the District of Columbia of the United States in central North America. The term excludes the only two non-contiguous states and the last two to be admitted to the Union, which are Alaska and Hawaii, and all other offshore insular areas, such as the U.S. territories of American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. The colloquial term Lower 48 is also used, especially in relation to Alaska. The term The Mainland is used in Hawaii. The related but distinct term continental United States includes Alaska, which is also in North America, but separated from the 48 states by British Columbia in Canada, but excludes Hawaii and all the insular areas in the Caribbean and the Pacific. The greatest distance on a great-circle route entirely within the contiguous U.S. is 2,802 miles (4,509 km), coast-to-coast between Florida and Washington state; the greatest north–south line is 1,650 miles (2,660 km). The contiguous United States occupies an area of 3,119,884.69 square miles (8,080,464.3 km2). Of this area, 2,959,064.44 square miles (7,663,941.7 km2) is actual land, composing 83.65 percent of the country's total land area, and is comparable in size to the area of Australia. Officially, 160,820.25 square miles (416,522.5 km2) of the contiguous United States is water area, composing 62.66 percent of the nation's total water area. If just the contiguous United States were a country, it would be fifth on the list of countries and dependencies by area, behind Russia, Canada, China, and Brazil. However, the total area of the United States, including Alaska and Hawaii, ranks third or fourth. Brazil is 166,000 square miles (431,000 km2) larger than the contiguous United States, but smaller than the entire United States, including Alaska, Hawaii, and overseas territories. The 2020 U.S. census population of the area was 328,571,074, comprising 99.13 percent of the nation's total population, and a density of 111.04 inhabitants/sq mi (42.872/km2), compared to 93.844/sq mi (36.233/km2) for the nation as a whole. Other terms While conterminous U.S. has the precise meaning of contiguous U.S. (both adjectives meaning "sharing a common boundary"), other terms commonly used to describe the 48 contiguous states have a greater degree of ambiguity. Because Alaska is also a part of North America, the term continental United States also includes that state, so the term is qualified with the explicit inclusion of Alaska to resolve any ambiguity. On May 14, 1959, the United States Board on Geographic Names issued the following definitions based partially on the reference in the Alaska Omnibus Bill, which defined the continental United States as "the 49 States on the North American Continent and the District of Columbia..." The Board reaffirmed these definitions on May 13, 1999. However, even before Alaska became a state, it was properly included within the continental U.S. due to being an incorporated territory. 
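Stepping back to the statistics quoted earlier, the population-density figures are self-consistent, as a quick Python check shows:

```python
# Cross-checking the contiguous-U.S. population density from the
# census population and land-area figures given above.
population = 328_571_074
land_sq_mi = 2_959_064.44
land_km2 = 7_663_941.7

print(f"{population / land_sq_mi:.2f} per sq mi")  # ~111.04
print(f"{population / land_km2:.3f} per km2")      # ~42.872
```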
The term mainland United States is sometimes used synonymously with continental United States, but technically refers only to those parts of states connected to the landmass of North America, thereby excluding not only Hawaii and overseas insular areas, but also islands which are part of continental states but separated from the mainland, such as the Aleutian Islands (Alaska), San Juan Islands (Washington), the Channel Islands (California), the Keys (Florida), the barrier islands (Gulf and East Coast states), and Long Island (New York). CONUS, a technical term used by the U.S. Department of Defense, General Services Administration, NOAA/National Weather Service, and others, has been defined both as the continental United States and as the 48 contiguous states. The District of Columbia is not always specifically mentioned as being part of CONUS. OCONUS is derived from CONUS with O for outside added, thus referring to Outside of Continental United States. The term lower 48 is also used to refer to the conterminous United States. The National Geographic style guide recommends the use of contiguous or conterminous United States instead of lower 48 when the 48 states are meant, unless used in the context of Alaska. Almost all of Hawaii is south of the southernmost point of the conterminous United States in Florida. During World War II, the first four numbered Air Forces of the United States Army Air Forces (USAAF) were said to be assigned to the Zone of the Interior by the American military organizations of the time—the future states of Alaska and Hawaii, then each only organized incorporated territories of the Union, were respectively covered by the Eleventh Air Force and Seventh Air Force during the war.[citation needed] Terms used in the non-contiguous U.S. jurisdictions Residents of Alaska, Hawaii, and offshore U.S. territories have unique labels for the contiguous United States because of their own locations relative to it. The vast territory of Alaska became the 49th state of the United States on January 3, 1959. Alaska is the northwest extremity of the North American continent, separated from the U.S. West Coast by the Canadian province of British Columbia. The term Lower 48 has, for many years, been a common Alaskan equivalent for "contiguous United States"; some Alaskans may use the term Outside for those states, though some may use Outside to refer to any location not within Alaska. The territory of Hawaii, consisting of the entire Hawaiian Islands archipelago except for Midway Atoll,[a] became the 50th state of the United States on August 21, 1959. It is the southernmost U.S. state, and the latest one to join the Union. Not part of any continent, Hawaii is located in the Pacific Ocean, about 2,200 miles (3,541 km) from North America and almost halfway between North America and Asia. In Hawaii and overseas American territories, for instance, the terms the Mainland or U.S. Mainland are often used to refer to the 49 states in North America. Puerto Rico is an unincorporated territory of the United States located in the northeast Caribbean Sea, approximately 1,000 miles (1,609 km) southeast of Miami, Florida. Puerto Ricans born in Puerto Rico are U.S. citizens and are free to move to the mainland United States. The term Stateside Puerto Rican refers to residents of any U.S. state or the District of Columbia who were born in, or can trace their family ancestry to, Puerto Rico. The U.S. Virgin Islands is a U.S. territory located directly to the east of Puerto Rico in the Caribbean Sea. 
The term stateside is used to refer to the mainland, in relation to the U.S. Virgin Islands (see Stateside Virgin Islands Americans). American Samoa is a U.S. territory located in the South Pacific Ocean in Polynesia, south of the equator, about 2,200 miles (3,500 km) southwest of Hawaii. In American Samoa, the contiguous United States is called the "mainland United States" or "the states"; those not from American Samoa are called palagi (outsiders). Non-contiguous areas within the contiguous United States Apart from offshore U.S. islands, a few continental portions of the contiguous United States are accessible by road only by traveling through Canada. Point Roberts, Washington; Elm Point, Minnesota, and two nearby points; the Northwest Angle in Minnesota; a peninsula in Osthus Lake in North Dakota's Rolette County; and a slice of land on the edge of Lake Metigoshe in Bottineau County bordering Winchester, Canada, are seven such places. Alburgh, Vermont, is not directly connected by land to the rest of the contiguous US, but is accessible by road via bridges from within Vermont and from New York; nearby Province Point is accessible over land only from Canada, though no roads go there. In contrast, Hyder, Alaska, is physically part of mainland Alaska and is its easternmost town, but the only practical overland access is by road through Canada. List of contiguous U.S. states The 48 contiguous states are: Alabama, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, and Wyoming. In addition, the District of Columbia is within the contiguous United States. See also Notes References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Percy_the_Park_Keeper] | [TOKENS: 156] |
Contents Percy the Park Keeper Percy the Park Keeper is a British animated children's television series based on the popular books by British author Nick Butterworth. The series is produced by HIT Entertainment with animation production by Grand Slamm Children's Films. The series initially ran as four half-hour specials that aired on CITV between December 1996 and December 1997, which were followed by a single series of thirteen ten-minute episodes that aired between September and December 1999. Characters and voice cast Episodes Other Material "After the Storm" received a theatrical adaptation in London at Christmas 2015. The franchise was later commemorated with a statue of Percy in Raphael Park. Release HIT Entertainment released "After the Storm" on VHS in September 1998. References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-231] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread in the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north–south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground; it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) and one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which began 3.3–2.9 billion years ago, continues to the present and dominates the geological processes still at work. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
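The day and year lengths quoted above can be converted into one another directly. A minimal Python sketch using only the rounded figures from this passage; the more precise sol length of 24.6597 hours is noted as an aside:

```python
# Converting the Martian day/year figures quoted above.
sol_hours       = 24.6      # length of a Martian solar day, in Earth hours
year_earth_days = 687       # Martian year, in Earth days

print(year_earth_days / 365.25)            # ~1.88 Earth years per Martian year
print(year_earth_days * 24 / sol_hours)    # ~670 sols per Martian year
# With the more precise sol of 24.6597 h, this comes out near 668.6 sols.
```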
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary periods: the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
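The mass, volume and gravity ratios just quoted are tied together by simple scaling laws: volume goes as the cube of the radius, and surface gravity as mass over radius squared. A minimal Python check, using standard mass and radius ratios consistent with the figures in the text:

```python
# Why ~11% of Earth's mass gives ~38% of Earth's surface gravity:
# g is proportional to M / R^2, while volume is proportional to R^3.
mass_ratio   = 0.107   # M_Mars / M_Earth (~11%, standard value)
radius_ratio = 0.532   # R_Mars / R_Earth (diameter about half of Earth's)

print(mass_ratio / radius_ratio**2)   # ~0.38 -> ~38% surface gravity
print(radius_ratio**3)                # ~0.15 -> ~15% of Earth's volume
```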
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the radiation of 1.84 millisieverts per day, or 22 millirads per day, during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day; lava tubes southwest of Hadriacus Mons potentially have levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
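To put the daily radiation rates above in context, they can be accumulated over a notional round trip. A minimal Python sketch; the mission profile here (two 180-day transits and a 500-day surface stay) is an assumption made for the example, not a figure from the text:

```python
# Cumulative dose for a hypothetical Mars mission, using the daily rates
# quoted above. Transit and stay durations are assumptions for illustration.
transit_msv_per_day = 1.84    # during the flight to and from Mars
surface_msv_per_day = 0.64    # average on the Martian surface

transit_days = 2 * 180        # assumed round-trip cruise time
surface_days = 500            # assumed surface stay

total_msv = (transit_days * transit_msv_per_day
             + surface_days * surface_msv_per_day)
print(total_msv)              # ~982 mSv, i.e. roughly one sievert in total
```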
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude, to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea-level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported on an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
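The 0.6% figure for the pressure datum above follows directly from the two pressures involved. A minimal Python check; Earth's standard sea-level pressure of 101,325 Pa is the only constant supplied here:

```python
# The Martian zero-elevation datum as a fraction of Earth's sea-level pressure.
mars_datum_pa      = 610.5      # pressure defining zero altitude on Mars
earth_sea_level_pa = 101_325    # standard atmosphere on Earth

print(mars_datum_pa / 100)                 # 6.105 mbar (1 mbar = 100 Pa)
print(mars_datum_pa / earth_sea_level_pa)  # ~0.006 atm, i.e. about 0.6%
```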
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars possibly a planet with a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols.
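Taking the 3 μm figure above at face value gives a sense of how little material even a global storm redistributes. A minimal Python sketch; Mars's mean radius (3,389.5 km) and the spherical-zone area formula are supplied here, not taken from the text:

```python
# Volume of a 3-um dust layer spread uniformly between 58 deg N and 58 deg S.
# The area of a spherical zone between latitudes +/-phi is 4*pi*R^2*sin(phi).
import math

R = 3_389_500                  # Mars mean radius, in metres (standard value)
phi = math.radians(58)
layer_thickness = 3e-6         # 3 micrometres, from the text

zone_area = 4 * math.pi * R**2 * math.sin(phi)
volume_km3 = zone_area * layer_thickness / 1e9
print(volume_km3)              # ~0.37 km^3 of dust for the whole 2001 storm
```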
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that the methane concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons alternating between its northern and southern hemispheres, similar to those on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity; it approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also produce deposits of carbon dioxide dry ice on the polar ice caps. Hydrology While Mars contains considerable amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and various forms of snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering, and no superimposed impact craters, have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
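The 11-metre figure at the start of this passage implies a total ice volume of roughly the planet's surface area times that depth. A minimal Python estimate; Mars's mean radius of 3,389.5 km is the one value supplied here, and spreading the layer over the whole sphere is a simplification:

```python
# Ice volume implied by an 11 m global-equivalent water layer.
import math

R_km  = 3_389.5                        # Mars mean radius (standard value)
depth = 0.011                          # 11 metres, in kilometres

surface_area = 4 * math.pi * R_km**2   # ~1.44e8 km^2
print(surface_area * depth)            # ~1.6e6 km^3 of water ice
```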
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect that much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
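The "five to seven times" range quoted above follows from propagating the stated uncertainty through the ratio. A minimal Python check using only the two D/H values from the text (both in units of 10−4):

```python
# Reproducing the deuterium enrichment range from the stated D/H ratios.
mars_dh, mars_err = 9.3, 1.7    # Martian atmosphere, in units of 1e-4
earth_dh          = 1.56        # Earth reference, in units of 1e-4

print((mars_dh - mars_err) / earth_dh)  # ~4.9, the lower bound
print(mars_dh / earth_dh)               # ~6.0, the central value
print((mars_dh + mars_err) / earth_dh)  # ~7.1, the upper bound
```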
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second lowest of any planet from Earth; only the transfer to Venus requires less. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, with a synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Solar System, forming a straight line through the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet's centre.
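Two of the figures in this section can be checked with one-line calculations: the 779.9-day synodic period follows from the two orbital periods, and the moons' orbital periods follow from the radii just quoted via Kepler's third law. A minimal Python sketch; the standard gravitational parameter of Mars (GM ≈ 4.2828 × 10^13 m³/s²) is the one value supplied here:

```python
# Synodic period of Mars and the orbital periods of its moons.
import math

earth_year = 365.256      # days
mars_year  = 686.98       # days (1.8809 Earth years)

# 1/S = 1/P_inner - 1/P_outer for two coplanar, near-circular orbits.
synodic = 1 / (1 / earth_year - 1 / mars_year)
print(synodic)            # ~779.9 days between successive oppositions

GM_MARS = 4.2828e13       # m^3/s^2, standard gravitational parameter of Mars

def period_hours(a_metres):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / GM)."""
    return 2 * math.pi * math.sqrt(a_metres**3 / GM_MARS) / 3600

print(period_hours(9_376e3))    # Phobos: ~7.7 h, far below Mars's 24.6 h day
print(period_hours(23_460e3))   # Deimos: ~30.3 h, just above synchronous
```

Phobos orbiting faster than Mars rotates is why, as described below, it rises in the west; Deimos, just above the synchronous altitude, rises in the east but only slowly.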
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from those of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study by a team of researchers from multiple countries suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past: rocks that record tidal processes on the planet suggest that those tides may have been regulated by such a moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but during later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "fiery"); more commonly, the Greek name for the planet was Ares, after the god of war. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum.: 433–437 In 1610, the first use of a telescope for astronomical observation, including of Mars, was made by Italian astronomer Galileo Galilei. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
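The Babylonian goal-year relation quoted earlier in this passage (37 synodic periods and 42 zodiac circuits in 79 years) is easy to verify against the modern period values given earlier in this article. A minimal Python check:

```python
# Checking the Babylonian 79-year relation for Mars with modern values.
earth_year = 365.25    # days
mars_year  = 686.98    # Mars's sidereal (zodiac-circuit) period, in days
synodic    = 779.94    # days between successive oppositions

days_in_79_years = 79 * earth_year
print(days_in_79_years / synodic)     # ~37.0 synodic periods
print(days_in_79_years / mars_year)   # ~42.0 circuits of the zodiac
```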
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Eugène Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flying past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioning in orbit before reaching its destination, Phobos. In 1997 Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), started an uninterrupted active robotic presence at Mars that has lasted to today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, among them 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft, as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and it has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars, and could likewise have preserved signs of life there, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the rock permits no definitive determination of a biological or abiotic origin with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021 China announced plans to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship, and the colony would initially be sustained by resupply from Earth and by in situ resource utilization on Mars, until it reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-78] | [TOKENS: 13839] |
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the centers of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but were ubiquitous in the centers of galaxies: almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, since he had died in 2018. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. Verifying an object as a black hole by this definition, however, would require waiting an infinite time, arbitrarily far from the black hole, to confirm that nothing has escaped; the definition therefore cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, the external gravitational field of a black hole, seen from far away, is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J of a black hole of mass M are expected to satisfy the inequality $Q^2/(4\pi\epsilon_0) + c^2 J^2/(G M^2) \leq G M^2$. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because such singularities would make the universe inherently unpredictable, many physicists believe they cannot exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, since natural processes counteract increases in spin and charge as a black hole approaches extremality. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly: one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects. 
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq GM^2/c$, allowing definition of a dimensionless spin magnitude such that $0 \leq cJ/(GM^2) \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$ (in Gaussian units), where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes produced by low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. 
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes in the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will become unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One proposed method of fueling these jets is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion. 
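The spin and charge limits quoted above can be checked numerically. The short Python sketch below is illustrative only: the function names are ours, the constants are rounded standard values, and it simply evaluates the maximum angular momentum J = GM²/c, the SI form of the charge bound (the article's Q ≤ √G·M is the same bound written in Gaussian units), and the combined extremality inequality.

# Minimal sketch: extremality bounds on black hole spin and charge (SI units).
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
eps0  = 8.854e-12    # vacuum permittivity, F/m
M_SUN = 1.989e30     # solar mass, kg

def j_max(m):
    """Maximum angular momentum of an uncharged black hole, kg m^2 s^-1."""
    return G * m**2 / c

def q_max(m):
    """Maximum charge of a nonspinning black hole, coulombs (SI form of Q <= sqrt(G) M)."""
    return math.sqrt(4 * math.pi * eps0 * G) * m

def is_subextremal(m, j, q):
    """Combined bound quoted above: Q^2/(4 pi eps0) + c^2 J^2/(G M^2) <= G M^2."""
    return q**2 / (4 * math.pi * eps0) + c**2 * j**2 / (G * m**2) <= G * m**2

m = 10 * M_SUN  # a typical stellar black hole
print(f"J_max = {j_max(m):.2e} kg m^2/s, Q_max = {q_max(m):.2e} C")
print("90% of maximal spin, no charge:", is_subextremal(m, 0.9 * j_max(m), 0.0))

For a 10 M☉ hole this gives a maximum angular momentum of roughly 9×10⁴³ kg m² s⁻¹, and a spin at 90% of the maximum rate, like that quoted above for Sagittarius A*, comfortably satisfies the bound.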
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be classified as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\rm ISCO} = 3\,r_{\rm s} = 6GM/c^2$, where $r_{\rm ISCO}$ is the radius of the ISCO, $r_{\rm s}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO moves inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde). 
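As a numerical illustration of the ISCO formula above, the minimal sketch below (function names ours; constants are rounded standard values) evaluates r_s and r_ISCO for representative masses, and also checks the mean-density scaling quoted later in this section.

# Minimal sketch: Schwarzschild radius, ISCO, and mean density within r_s.
import math

G, c  = 6.674e-11, 2.998e8   # SI units
M_SUN = 1.989e30             # kg

def r_s(m):
    """Schwarzschild radius r_s = 2GM/c^2, metres."""
    return 2 * G * m / c**2

def r_isco(m):
    """ISCO of a nonspinning black hole: r_ISCO = 3 r_s = 6GM/c^2, metres."""
    return 3 * r_s(m)

def mean_density(m):
    """Mass over the Euclidean volume inside r_s, kg/m^3; scales as 1/M^2."""
    return m / (4 / 3 * math.pi * r_s(m)**3)

print(f"1 M_sun:   r_s = {r_s(M_SUN) / 1e3:.2f} km, r_ISCO = {r_isco(M_SUN) / 1e3:.2f} km")
print(f"1e8 M_sun: mean density = {mean_density(1e8 * M_SUN):.0f} kg/m^3")

The first line reproduces the 2.95 km per solar mass quoted later for the Schwarzschild radius; the second gives roughly 1.8×10³ kg/m³, within a factor of two of water, matching the density comparison made below.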
For example, the ISCO for a particle orbiting retrograde around a maximally spinning black hole can be as far out as about $9GM/c^2$ (4.5 $r_{\rm s}$), while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary on which photons moving tangentially to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; for non-Schwarzschild black holes, the radius is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon orbits prograde or retrograde: a photon orbiting prograde circles between one and three gravitational radii ($GM/c^2$) from the center of the black hole, while a photon orbiting retrograde circles between three and four gravitational radii, with the exact location depending on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there is only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there are always two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime is dragged around somewhat like a vortex: any matter and light near the spinning black hole are pulled into rotation around it. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region. 
In this region it is no longer possible for free-falling matter to follow circular orbits or to stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through $r_{\rm s} = 2GM/c^2 \approx 2.95\,(M/M_\odot)\ \mathrm{km}$, where $M_\odot$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] and an extremal black hole could have an event horizon as small as $r_+ = GM/c^2$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. 
At the ingoing section of the Cauchy horizon, radiation and matter falling into the black hole would build up at the horizon, causing the curvature of spacetime there to grow to infinity. This would cause an observer falling in to experience tidal forces. The phenomenon is often called mass inflation, since it is associated with the exponential growth of a parameter describing the black hole's internal mass, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, at which an object would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside, points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes, which lack singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback. 
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or the collapse of hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing heavier and heavier elements, up to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion then ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time for a stellar-mass seed to reach quasar masses. One suggestion is the direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and could then have grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed black hole formation by other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form black holes, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. 
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and lacked the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two black holes in a supermassive binary approach each other, most nearby stars are ejected, leaving little for the remaining pair to gravitationally interact with that would allow them to draw closer together. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk. 
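The Eddington limit just mentioned can be estimated from the standard textbook balance between radiation pressure on electrons and gravity on protons in ionized hydrogen. The formula L_Edd = 4πGMm_p·c/σ_T used in the minimal sketch below is that standard expression, not one given in this article, and the function names are ours.

# Minimal sketch: Eddington luminosity for ionized hydrogen.
import math

G       = 6.674e-11   # m^3 kg^-1 s^-2
c       = 2.998e8     # m/s
m_p     = 1.673e-27   # proton mass, kg
sigma_T = 6.652e-29   # Thomson scattering cross-section, m^2
M_SUN   = 1.989e30    # kg

def l_eddington(m):
    """Luminosity at which radiation pressure balances gravity on infalling gas, watts."""
    return 4 * math.pi * G * m * m_p * c / sigma_T

for m_solar in (10, 1e6, 1e9):   # stellar, Sgr A*-like, quasar-scale masses
    print(f"M = {m_solar:.0e} M_sun -> L_Edd = {l_eddington(m_solar * M_SUN):.2e} W")

This works out to about 1.3×10³¹ W per solar mass, so a 10⁹ M☉ engine radiating near its Eddington limit can outshine an entire galaxy, consistent with the quasar luminosities attributed to accreting supermassive black holes above.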
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict them. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. 
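The Hawking temperature figures quoted above follow from T_H = ħc³/(8πGMk_B), and the evaporation claims can be compared with the photon-only lifetime estimate t ≈ 5120πG²M³/(ħc⁴). Both formulas are standard results rather than expressions given in this article, and the photon-only lifetime is deliberately crude, since real evaporation also proceeds through other particle species. A minimal sketch:

# Minimal sketch: Hawking temperature and a crude photon-only evaporation time.
import math

G, c  = 6.674e-11, 2.998e8
hbar  = 1.055e-34    # reduced Planck constant, J s
k_B   = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30     # kg
T_CMB = 2.725        # cosmic microwave background temperature, K

def hawking_temperature(m):
    """T_H = hbar c^3 / (8 pi G M k_B), kelvin; inversely proportional to mass."""
    return hbar * c**3 / (8 * math.pi * G * m * k_B)

def evaporation_time(m):
    """Photon-only estimate t = 5120 pi G^2 M^3 / (hbar c^4), seconds."""
    return 5120 * math.pi * G**2 * m**3 / (hbar * c**4)

print(f"T_H(1 M_sun) = {hawking_temperature(M_SUN) * 1e9:.0f} nK")   # ~62 nK, as quoted
m_cmb = hbar * c**3 / (8 * math.pi * G * k_B * T_CMB)                # hotter than the CMB below this mass
print(f"CMB crossover mass = {m_cmb:.1e} kg")                        # ~4.5e22 kg, a bit below lunar mass
print(f"t_evap(10^15 g = 1e12 kg) = {evaporation_time(1e12):.1e} s")

The first line reproduces the 62 nK figure, and the crossover mass of about 4.5×10²² kg is indeed somewhat below the Moon's 7.3×10²² kg. The photon-only lifetime of a 10¹⁵ g hole comes out near 10²⁰ s, a couple of orders of magnitude above the age of the universe; accounting for the emission of neutrinos, electrons, and other species shortens the lifetime enough to match the evaporation threshold quoted earlier.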
The two sets of laws are not equivalent, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split down two long arms of a tunnel. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel out, producing a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise in order to detect them. Since the first detection in 2015, many gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. 
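The mass inference described next is an application of Kepler's third law, M = 4π²a³/(GT²). The minimal sketch below applies it with approximate published orbital elements for the star S2 (semi-major axis of roughly 1,030 AU and period of roughly 16 years); those values are assumptions taken from the literature, not figures from this article.

# Minimal sketch: enclosed mass from a Keplerian orbit, M = 4 pi^2 a^3 / (G T^2).
import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
AU    = 1.496e11    # astronomical unit, m
YEAR  = 3.156e7     # year, s
M_SUN = 1.989e30    # kg

def kepler_mass(a, t):
    """Central mass implied by an orbit of semi-major axis a (m) and period t (s), kg."""
    return 4 * math.pi**2 * a**3 / (G * t**2)

a, t = 1030 * AU, 16.05 * YEAR   # approximate orbital elements of S2 (assumed)
print(f"M(Sgr A*) ~ {kepler_mass(a, t) / M_SUN:.1e} M_sun")

The result is about 4×10⁶ M☉, consistent with the 4.3×10⁶ M☉ refinement described below.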
In 1998, by fitting the motions of these stars to Keplerian orbits, astronomers were able to infer that Sagittarius A* must be a 2.6×10⁶ M☉ object contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculation of the mass of Sagittarius A* to 4.3×10⁶ M☉, within a radius of less than 0.002 light-years. This upper-limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. 
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing (the angular scale involved is sketched after this paragraph). Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes rests on the existence of an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could permit other kinds of massive compact objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. More extreme still, electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
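The lensed images are unresolvable because the angular Einstein radius of a stellar-mass lens, θ_E = √(4GM/c² · (d_S − d_L)/(d_L·d_S)), is tiny. A rough sketch; the lens mass echoes the 7 M☉ detection above, while the distances are hypothetical, chosen only to show the scale:

```python
import math

G, c  = 6.674e-11, 2.998e8
M_SUN = 1.989e30
KPC   = 3.086e19   # metres per kiloparsec

def einstein_radius_mas(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius of a point lens, in milliarcseconds."""
    theta_sq = (4 * G * mass_kg / c**2) * (d_source_m - d_lens_m) / (d_lens_m * d_source_m)
    return math.sqrt(theta_sq) * 206265 * 1e3   # radians -> mas

# Illustrative geometry: a ~7 solar-mass lens partway to a bulge star.
# Both distances are hypothetical.
print(einstein_radius_mas(7 * M_SUN, 1.6 * KPC, 8 * KPC))   # ~5 mas
# Milliarcsecond image splittings are below what most optical telescopes
# resolve, so only the transient brightening (microlensing) is observed.
```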
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outwards pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126 Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe at redshifts of z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion (sketched after this paragraph). Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may also have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
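The Eddington limit mentioned above corresponds to the luminosity at which radiation pressure on ionized hydrogen balances gravity, L_Edd = 4πGMm_pc/σ_T. A minimal sketch; the 10⁹ M☉ example mass is illustrative:

```python
import math

G       = 6.674e-11     # m^3 kg^-1 s^-2
c       = 2.998e8       # m/s
m_p     = 1.673e-27     # proton mass, kg
sigma_T = 6.652e-29     # Thomson scattering cross-section, m^2
M_SUN   = 1.989e30      # kg
L_SUN   = 3.828e26      # W

def eddington_luminosity(mass_kg):
    """Luminosity at which radiation pressure balances gravity."""
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

# A hypothetical billion-solar-mass quasar engine:
L = eddington_luminosity(1e9 * M_SUN)
print(f"L_Edd ~ {L:.1e} W ~ {L / L_SUN:.1e} L_sun")   # ~1.3e40 W, ~3e13 L_sun
# Growing such a hole by z >= 7 while respecting this limit is difficult,
# which motivates the alternative formation channels discussed above.
```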
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk trapping the outgoing radiation whose pressure would otherwise choke off accretion. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew to public recognition in the 1960s and 1970s, they began to be featured in novels and in films such as Disney's The Black Hole (1979). Black holes have continued to appear in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Corona_Borealis] | [TOKENS: 4603] |
Contents Corona Borealis Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by her in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den or a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern. The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars, the R Coronae Borealis variables, that are extremely hydrogen deficient and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946, and was predicted to do the same in 2025. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five stars in the constellation host Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster. Characteristics Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the IAU designated constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S[a] (the geometry behind this limit is sketched after this paragraph). It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 15h 16.0m and 16h 25.1m, while the declination coordinates are between +25.54° and +39.71°. It has a counterpart, Corona Australis, in the Southern Celestial Hemisphere. Features The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas Uranometria. Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis: classed by Bayer as a single star, it was noted by Flamsteed to be two close stars. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis.
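The 50°S visibility figure follows from simple spherical-astronomy arithmetic: a star at declination δ rises, at least briefly, for observers at latitudes north of δ − 90°. A minimal sketch, ignoring atmospheric refraction and extinction near the horizon (the function name is illustrative):

```python
# A star at declination dec_deg rises for observers at latitudes north of
# (dec_deg - 90); refraction and extinction near the horizon are ignored.
def southern_visibility_limit(dec_deg: float) -> float:
    return dec_deg - 90.0

# Corona Borealis spans declinations of roughly +25.54 to +39.71 degrees.
print(southern_visibility_limit(25.54))   # -64.46: southernmost stars rise
print(southern_visibility_limit(39.71))   # -50.29: binding limit for the
                                          # whole constellation
# The entire figure is therefore geometrically visible north of about 50 S,
# matching the limit quoted above; stars hugging the horizon are hard to
# observe in practice.
```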
Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5.[b] Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun (M☉) and 57 times as luminous (L☉), and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is slightly smaller than the Sun (0.9 times its diameter). Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space. Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around 2.1 M☉, 2.6 solar radii (R☉), and 25.3 L☉. The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around 1.4 M☉, 1.56 R☉, and between 4 and 5 L☉. Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly, at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk. Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around 2.4 M☉ and has swollen to 7.4 R☉. It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core. Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, and hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system resolvable in small amateur telescopes.
It is actually a complex system composed of two stars, each about as massive as the Sun, that orbit each other every 1.14 days and are in turn orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries.[c] Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid at around magnitude 10 (it has a minimum of 10.2 and a maximum of 9.9), it brightens to magnitude 2 over a period of hours, driven by a runaway thermonuclear reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis started dimming in March 2023, and it is known to dim for about a year before going nova; for this reason it was initially expected to go nova at any time between March and September 2024. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and the prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding to a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 μm) ejected periodically. There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year).
Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days. TY Coronae Borealis is a pulsating white dwarf of the ZZ Ceti type, which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the asymmetrical accretion disk takes to precess around the star (this beat arithmetic is checked in the sketch after this paragraph). Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter (MJ) orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is a 1.7 M☉ orange giant of spectral type K2III that has swollen to 21 R☉ and 151 L☉. Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lie a dust debris disk and one planet with a period of 3.4 years and an estimated mass of 2.5 MJ. The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant hosting one confirmed planet of 0.83 MJ that orbits every 187 days, one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of 2.9 MJ; discovered by the Doppler method in 2010, the planet takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star located approximately 560 light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days. The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed the companion instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars (yellow main sequence stars of spectral type G1V and G3V respectively) similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001. Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and NGC 6086, a faint spiral and an elliptical galaxy respectively, are close enough to each other to be seen in the same telescopic field of view.
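The 5.5-day beat period quoted for UW Coronae Borealis can be checked directly: two cycles of slightly different periods drift in and out of phase with a beat period given by 1/P_beat = 1/P₁ − 1/P₂. A quick sketch:

```python
# Beat period of two close periods: 1/P_beat = 1/P1 - 1/P2.
p_orbit_min = 111.0    # orbital period of the binary, minutes
p_disk_min  = 112.6    # cycle attributed to the precessing disk, minutes

p_beat_min = 1 / (1 / p_orbit_min - 1 / p_disk_min)
print(f"{p_beat_min / (60 * 24):.1f} days")   # ~5.4 days, close to the
                                              # 5.5-day beat quoted above
```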
Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away.[d] Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, makes up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters. Mythology In Greek mythology, Corona Borealis was linked to the legend of Theseus and the Minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternative version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the Minotaur, the creature the Cretans fed with tribute demanded from Athens. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. De astronomia, attributed to Hyginus, linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods; he had previously been hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Its proximity to the constellations Hercules (which De astronomia reports was once attributed to Theseus, among others) and Lyra (Theseus' lyre in one account) could indicate that the three constellations were invented as a group. Corona Borealis was one of the 48 constellations mentioned in the Almagest of classical astronomer Ptolemy. In Mesopotamia, Corona Borealis was associated with the goddess Nanaya. In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as Darželis, the "flower garden". The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up" (الفكة al-Fakkah), a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as qaṣʿat al-masākīn (قصعة المساكين), or "the dish/bowl of the poor people". The Native American Skidi people saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the Heavenly Sisters, who descended from the sky every night to dance on earth.
Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, her heartbroken husband and son eventually following her. The Mi'kmaq of eastern Canada saw Corona Borealis as Mskegwǒm, the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris). Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it Na Kaua-ki-tokerau and probably Te Hetu. The constellation was likely called Kaua-mea in Hawaii, Rangawhenua in New Zealand, and Te Wale-o-Awitu in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called Ao-o-Uvea or Kau-kupenga. In Australian Aboriginal astronomy, the constellation is called womera ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as mullion wollai, "eagle's nest", with Altair and Vega (each called mullion) the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence. Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas Mercurii Philosophicii Firmamentum Firminianum Descriptionem by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is an object of fear for one of the protagonists in the short story. Finnish band Cadacross released an album titled Corona Borealis in 2002.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Public_Security_(Israel)] | [TOKENS: 514] |
Contents Ministry of National Security (Israel) The Ministry of National Security (Hebrew: המשרד לביטחון לאומי, Arabic: وزارة الأمن القومي), formerly the Ministry of Internal Security and the Ministry of Police, is a government agency of Israel. The Ministry of National Security is the statewide law enforcement agency and oversees the Israel Police, the Israel Prison Service, the Israel National Fire and Rescue Services, the Israel Border Police, the National Headquarters for the Protection of Children on the Internet, the National Authority for Community Safety and the Authority for Witness Protection. The position of minister has been held by Itamar Ben-Gvir since March 2025. History The Minister of National Security (Hebrew: שר לביטחון לאומי, Sar LeVitahon Leumi) is the political head of the ministry. Until 1995 the position was known as Minister of Police (Hebrew: שר המשטרה, Sar HaMishtara). The first Minister of Police, Bechor-Shalom Sheetrit, a former policeman, held this position from May 1948 until a month before his death in January 1967. He served in fourteen governments, making him the country's longest continually serving minister. The post was abolished after Menachem Begin became Prime Minister in 1977, but was reinstated in 1984 when Shimon Peres was elected. In December 2022, the position was renamed again, changing from Minister of Public Security (Hebrew: שר לביטחון פנים, Sar LeVitahon Pnim) to Minister of National Security. This move has been criticized as an unnecessary expense for taxpayers. In February 2024, Ben-Gvir's appointment as minister was challenged, but Israel's High Court of Justice rejected the petition to nullify it. Agencies Unit for Public Inquiries and Complaints The Unit for Public Inquiries and Complaints operates under the aegis of the Internal Audit Division of the Ministry of Internal Security. It handles complaints from citizens against the Israel Police, the Prison Service, the National Fire and Rescue Authority, the Authority for the War on Drugs and Alcohol and the Division for Licensing and Inspection of Firearms. In accordance with the Internal Audit Law, the main duties of the division are to ensure that audited entities abide by the law and carry out their duties in an efficient, financially sound, corruption-free manner.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Interrupt] | [TOKENS: 5305] |
Contents Interrupt In digital computers, an interrupt[a] is a request for the processor to interrupt currently executing code (when permitted), so that the triggering event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume[b] normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error. Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking and system calls, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven. History Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops that waited for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953. Earlier, on the UNIVAC I (1951), "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts. Types Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture. A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS) or, if there is no OS, from the bare-metal program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries. In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device (the accept-save-dispatch-resume cycle is sketched in toy form after this paragraph).
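The accept-save-dispatch-resume cycle described above can be modelled in a few lines. This is a toy simulation, not any real processor's mechanism; all names are illustrative:

```python
# Toy model of a CPU servicing hardware interrupts at instruction boundaries.
from collections import deque

pending = deque()                     # asynchronous interrupt requests
handlers = {}                         # vector table: IRQ number -> handler

def register(irq, fn):
    handlers[irq] = fn

def raise_irq(irq):                   # a device asserts its request line
    pending.append(irq)

def run(program):
    for pc, instruction in enumerate(program):
        instruction()                 # execute one instruction
        while pending:                # check only at instruction boundaries
            irq = pending.popleft()
            saved_state = pc          # save enough state to resume
            handlers[irq](irq)        # run the interrupt service routine
            pc = saved_state          # restore state and continue

register(1, lambda irq: print(f"ISR handling IRQ {irq}"))
program = [lambda: None, lambda: raise_irq(1), lambda: print("resumed")]
run(program)   # the ISR runs between the second and third instructions
```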
On some older systems, such as the 1964 CDC 3600, all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables. To mask an interrupt is to disable it, so it is deferred[c] or ignored[d] by the processor, while to unmask an interrupt is to enable it. Processors typically have an internal interrupt mask register,[e] which allows selective enabling (and disabling) of hardware interrupts; a bit-level sketch of masking appears after this paragraph. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts. Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. On SPARC, however, the non-maskable interrupt (NMI), despite having the highest priority among interrupts, can be prevented from occurring through the use of an interrupt mask. One failure mode occurs when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this. As an example, IBM Operating System/360 (OS/360) relies on a not-ready-to-ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code in OS/360 so that the VARY ONLINE command will simulate a device-end interrupt on the target device. A spurious interrupt is a hardware interrupt for which no source can be found. The term "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves. In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there will not be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is that the processor will think another interrupt is pending, since the voltage at its interrupt request input will not be high or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker. A spurious interrupt may also be the result of electrical anomalies due to faulty circuit design, high noise levels, crosstalk, timing issues, or more rarely, device errata.
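The mask-register behaviour described above amounts to simple bit manipulation. A minimal sketch, assuming the set-bit-enables convention (some hardware inverts it, as noted above); the class and method names are illustrative:

```python
# Bit-level sketch of an interrupt mask register: one bit per IRQ line,
# set bit = interrupt enabled (the opposite convention also exists).
class MaskRegister:
    def __init__(self, width=8):
        self.bits = 0
        self.width = width

    def unmask(self, irq):            # enable: set the bit
        self.bits |= (1 << irq)

    def mask(self, irq):              # disable: clear the bit
        self.bits &= ~(1 << irq)

    def enabled(self, irq):
        return bool(self.bits & (1 << irq))

reg = MaskRegister()
reg.unmask(3)
assert reg.enabled(3) and not reg.enabled(5)
reg.mask(3)
assert not reg.enabled(3)   # IRQ 3 is now deferred or ignored by the CPU
```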
A spurious interrupt may result in system deadlock or other undefined operation if the ISR does not account for the possibility of such an interrupt occurring. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting. A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler. A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed.[f] Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by the virtual memory system. Typically, the operating system kernel will catch and handle software interrupts. Some interrupts are handled transparently to the program; for example, the normal resolution of a page fault is to make the required page accessible in physical memory. But in other cases, such as a segmentation fault, the operating system executes a process callback. On Unix-like operating systems this involves sending a signal such as SIGSEGV, SIGBUS, SIGILL or SIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made using Structured Exception Handling with an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO. Intentional software interrupts for system calls result in calls to routines in the kernel to perform the function requested by the system call. In a kernel process, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, an operating system crash may result. The terms interrupt, trap, exception, fault, and abort are used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms". The term trap may refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions with trap in their names. In some usages, the term trap refers specifically to a breakpoint intended to initiate a context switch to a monitor program or debugger. It may also refer to a synchronous interrupt caused by an exceptional condition (e.g., division by zero, invalid memory access, illegal opcode), although the term exception is more common for this. x86 divides interrupts into (hardware) interrupts and software exceptions, and identifies three types of exceptions: faults, traps, and aborts. (Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity. A fault is restartable as well but is tied to the synchronous execution of an instruction: the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction; one prominent use is to implement system calls.
An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often[g] does not allow a restart of the program. ARM uses the term exception to refer to all types of interrupts, and divides exceptions into (hardware) interrupts, aborts, reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous. RISC-V uses interrupt as the overall term as well as for the external subset; internal interrupts are called exceptions. Triggering methods Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes. (The two triggering styles are contrasted in the sketch after this paragraph.) A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced. The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs. Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR. As previously described, a processor whose level-sensitive interrupt input is connected to a wired-OR circuit is susceptible to spurious interrupts, which, should they occur, may cause deadlock or some other potentially fatal system fault. An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the transition was high-low, only one falling-edge interrupt would be triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger, where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level. Computers with edge-triggered interrupts may include an interrupt register that retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well. Processor response The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found.
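The contrast between the two triggering styles can be made concrete with a small simulation over a sampled line. This is a sketch, not hardware-accurate timing; the function names are illustrative:

```python
# Toy contrast between level-triggered and edge-triggered request detection
# over a sampled interrupt line (1 = asserted).
def level_requests(samples, active=1):
    """Level-triggered: requests service at every sample at the active level."""
    return [t for t, s in enumerate(samples) if s == active]

def edge_requests(samples, rising=True):
    """Edge-triggered: latches a request only on the specified transition."""
    edges = []
    for t in range(1, len(samples)):
        before, after = samples[t - 1], samples[t]
        if (before, after) == ((0, 1) if rising else (1, 0)):
            edges.append(t)
    return edges

line = [0, 1, 1, 1, 0, 1, 0]
print(level_requests(line))   # [1, 2, 3, 5] - a held level keeps requesting
print(edge_requests(line))    # [1, 5]       - one request per transition
```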
Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger. There are several different architectures for handling interrupts. In some, there is a single interrupt handler that must scan for the highest priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types, separate I/O channels or devices, or both. Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause. System implementation Interrupts may be fully handled in hardware by the CPU, or may be handled by both the CPU and another component such as a programmable interrupt controller or a southbridge. If an additional component is used, that component would be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. In systems on a chip (SoC) implementations, interrupts come from different blocks of the chip and are usually aggregated in an interrupt controller attached to one or several processors (in a multi-core system). Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive (default) state. Devices signal an interrupt by briefly driving the line to its non-default state, and letting the line float (not actively driving it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts, the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt, the CPU must check all the devices for service requirements (this poll-everything discipline is sketched after this paragraph). Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it will not interfere with interrupt signaling of other devices. However, an edge-triggered interrupt can easily be missed (for example, when interrupts are masked for a period), and unless there is some type of hardware latch that records the event, it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed.
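The poll-everything discipline for a shared line, referenced above, might look like the following sketch. The device objects and their status flags are illustrative stand-ins for hardware status registers:

```python
# Sketch of an ISR for a shared (wired-OR) interrupt line: poll every device,
# service all that are requesting, and re-check before exiting so that pulses
# merged while servicing are not lost.
class Device:
    def __init__(self, name):
        self.name, self.requesting = name, False

    def service(self):
        print(f"servicing {self.name}")
        self.requesting = False          # clearing the source drops its line

def shared_line_isr(devices):
    serviced_any = True
    while serviced_any:                  # loop until a full pass finds no work
        serviced_any = False
        for dev in devices:
            if dev.requesting:
                dev.service()
                serviced_any = True
    # Exiting only when no device is requesting avoids mistaking a slowly
    # decaying line for a new (spurious) interrupt.

devs = [Device("uart"), Device("timer")]
devs[0].requesting = devs[1].requesting = True
shared_line_isr(devs)
```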
The Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices that share IRQ lines generally work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them. There are three ways in which multiple devices can share the same line. First is by exclusive conduction (switching) or exclusive connection (to pins). Next is by bus (all connected to the same line, listening): cards on a bus must know when they are to talk and not talk (e.g., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, so no negotiated speed is required. Each has its speed versus distance advantages. A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, threshold (an oscilloscope can trigger on a wide variety of shapes and conditions). Triggering for software interrupts must be built into the software (both in the OS and in applications). A C application has a trigger table (a table of functions) in its header, which both the application and the OS know of and use appropriately; it is not related to hardware. However, this should not be confused with hardware interrupts, which signal the CPU (the CPU executes software from a table of functions, similarly to software interrupts). Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent. Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts. Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time. A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major, or even catastrophic, system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This two-step approach helps to eliminate false interrupts from affecting the system. A message-signaled interrupt does not use a physical interrupt line.
Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write. Message-signaled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge. Message-signaled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared. No additional effort is required. Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines. PCI Express, a serial computer bus, uses message-signaled interrupts exclusively. By analogy with a push button, the term doorbell or doorbell interrupt is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc. The term doorbell interrupt is usually a misnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one. Doorbell interrupts thus bear some similarity to message-signaled interrupts. In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts[i] (IPI). Performance Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm. There are various forms of livelock, in which the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (as from very high network traffic) may completely stall the system.
To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated with separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as IRQ affinity) can be manually configured. A purely software-based implementation of the receiving traffic distribution, known as receive packet steering (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and a reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). Receive flow steering (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests on the same cores on which particular network packets will be consumed by the targeted application. Typical uses Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps. Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, multiplexed displays, and GPIO outputs. A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may wake a process that is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead. Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family. For example, floating-point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating-point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating-point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed.
This provides application software portability across the entire line. Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).
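This interrupt-to-signal path can be observed from user space with the standard POSIX sigaction API. In the sketch below the signal is raised synthetically with raise() so the example stays well-defined; a real SIGFPE would instead arrive via a hardware trap mediated by the kernel:

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_fpe(int signo)
    {
        (void)signo;
        /* Only async-signal-safe functions belong in a handler. */
        static const char msg[] = "caught SIGFPE\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(EXIT_FAILURE);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = on_fpe;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);

        raise(SIGFPE);                  /* deliver the signal synthetically */
        return 0;                       /* never reached */
    }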
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-Tester1987-201] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny in which Holomycota (including the fungi), Ichthyosporea, Pluriformea and Filasterea branch off in turn before the Choanozoa, with some of these relationships remaining uncertain. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both; in the sponge-sister tree that they supported, Porifera branches first, followed by Ctenophora, then Placozoa, with Cnidaria and Bilateria as the closest pair (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, then Placozoa, with Cnidaria and Bilateria again the closest pair. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogeny for the Bilateria places the Xenacoelomorpha as the earliest branch, with the remaining bilaterians divided between the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia). Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Goblin] | [TOKENS: 1971] |
Contents Goblin A goblin is a diminutive, grotesque, and often malevolent humanoid creature prominent in European folklore, typically characterized by its mischievous or demonic nature, small stature (around 30 cm in some traditions), furry or leathery appearance, and ability to shapeshift. Goblins are believed to dwell in subterranean areas or households, where they engage in acts ranging from pranks to murder. The word derives from Old French gobelin (late 12th century), possibly linked to Medieval Latin gobelinus or Greek kobalos (meaning rogue or sprite), though some scholars trace it to earlier domestic protector spirits like the Germanic kobold, which were later demonized under Christian influence. Similar creatures include brownies, dwarves, duendes, gnomes, imps, leprechauns, and kobolds, but goblin is also commonly used as a blanket term for all small, fay creatures. The term is sometimes expanded to include goblin-like creatures of other cultures, such as the pukwudgie, dokkaebi, or ifrit. Etymology The term "goblin" entered English in the early 14th century, derived from the Anglo-Norman French gobelin or Old French gobelin, which was first attested in 1195 in the chronicle L'Estoire de la guerre sainte by the Norman monk Ambroise, where it described a treacherous figure. This French form traces back to Medieval Latin gobelinus, appearing around 1140 in Orderic Vitalis's Historia ecclesiastica, referring to a demon expelled from a church in Évreux. Scholars propose possible etymological connections to earlier languages, including Ancient Greek kóbalos, meaning "rogue" or "mischievous sprite", which may have influenced post-classical Latin forms like cobalus denoting a demon. Additionally, links have been suggested to Germanic kobold, a household spirit, potentially from Old High German elements meaning "room protector", though the precise relationship remains uncertain. Alternatively, it may be a diminutive or other derivative of the French proper name Gobel, more often Gobeau. Historical spellings of the term include gobelin (Old and Middle French), gobelinus (Medieval Latin), gobellin (Middle French, by 1506), and Middle English variants such as gobelyn around 1330. The adoption of "goblin" in early English literature was shaped by Norman folklore, where the term evoked domestic sprites or imps, as seen in Picard French goguelin for spirits haunting remote rooms, influencing its integration into medieval English texts like Wycliffe's Bible (late 14th century). The Welsh coblyn, a type of knocker, derives from the Old French gobelin via the English goblin. In folklore In English and Scottish folklore, goblins often appear as brownies, benevolent household spirits that perform domestic chores such as threshing grain, churning butter, or tending livestock during the night, provided they receive a small offering like porridge or milk left by the hearth. These creatures, typically depicted as small, shaggy-haired males dressed in ragged clothing, embody a symbiotic relationship with human households but are quick to abandon or turn mischievous if offered gifts of clothing or if their labor is criticized. In contrast, boggarts from Yorkshire traditions represent a more malevolent variant, functioning as vengeful familial spirits that attach to specific houses or farms, shapeshifting into animals or objects to perpetrate pranks, illness, or calamity upon those who slight them, often requiring rituals like relocation to appease their wrath.
Welsh folklore features the púca as a solitary trickster goblin, akin to a puckish sprite that misleads nighttime wanderers along paths or into bogs, sometimes assuming animal forms like a goat or horse to amplify the deception, though it may also assist those who show respect by leaving offerings. In Irish mythology, the clurichaun appears as a goblin-like fairy with an affinity for alcohol, haunting cellars and breweries where it pilfers liquor, rides barrels like horses, and unleashes drunken fury on distillers who disturb its revels. French and Norman traditions portray lutins as impish goblins that frolic in stables, knotting horses' manes into fairy-locks for sport or covertly aiding with nighttime labors, their dual temperament shifting from playful to petty depending on human hospitality. The region of Évreux in northern France holds particular significance as a historical hub of goblin lore, where the 12th-century Orderic Vitalis describes the demon Gobelinus, a prototype for later goblin figures, haunting pagan sites and temples, expelled only through saintly intervention. Prominent narratives in European goblin traditions include tales of fairy markets from folklore, where spectral merchants peddle illusory fruits and wares to ensnare the unwary, symbolizing temptation and otherworldly commerce in rural traditions, later popularized in literature such as Christina Rossetti's Goblin Market (1862). Redcap legends from Anglo-Scottish border lore depict these ferocious goblins as squat, iron-shod murderers dwelling in forsaken border towers, who slay wayfarers with their pikestaffs and soak their knitted caps in the spilled blood to maintain their vivid hue, fleeing only from consecrated objects or swift escapees. Korean folklore features the dokkaebi, horned tricksters animated from discarded household tools like brooms or rice bowls through spiritual possession, wielding magical clubs (bangmangi) to enforce games or punishments on humans. Unlike purely malevolent entities, dokkaebi often reward clever individuals with treasures after riddles or wrestling matches. Among African cultures, the Zulu tokoloshe embodies an evil sprite summoned by sangomas to inflict misfortune, illness, or nocturnal terror, particularly on children whom it scratches or devours. This hairy, diminutive water spirit, capable of invisibility and shape-shifting, is warded off by elevating beds on bricks to exploit its short stature. In Egyptian and broader Middle Eastern lore, certain jinn exhibit goblin-like prankster qualities, such as misplacing items, mimicking voices to deceive travelers, or creating illusory disturbances in homes. These shape-shifting spirits are invisible to humans unless they choose otherwise. Indigenous American traditions include the Wampanoag pukwudgie, porcupine-quilled tricksters who wield poison arrows to mislead or injure humans in forested areas, originally benevolent guides turned vengeful after perceived slights by the Creator. These knee-high, gray-skinned beings use illusions to lure victims off paths. In fiction In J. R. R. Tolkien's The Hobbit the evil creatures living in the Misty Mountains are referred to as goblins. In The Lord of the Rings, the same creatures are primarily referred to as orcs where the goblin name was used for the lesser orcs. Goblinoids are a category of humanoid legendary creatures related to the goblin. 
The term was popularized in the Dungeons & Dragons fantasy role-playing game, in which goblins and related creatures are a staple of random encounters. Goblinoids are typically barbaric foes of the various human and "demi-human" races. Even though goblinoids in modern fantasy fiction are derived from J. R. R. Tolkien's orcs, the main types of goblinoids in Dungeons & Dragons are goblins, bugbears and hobgoblins; like ordinary goblins, these creatures are also figures of mythology. In the Harry Potter book series and the shared universe in which its film adaptations are set, goblins are depicted as strange, but civilised, humanoids who often serve as bankers or craftsmen. In Terry Pratchett's Discworld series, goblins are initially a despised and shunned subterranean race; however, in later books, goblins are eventually integrated with the other races, and their mechanical and engineering talents come to be valued. The Green Goblin is a well-known supervillain, one of the archenemies of Spider-Man, who has various abilities including enhanced stamina, durability, agility, reflexes and superhuman strength due to ingesting a substance known as the "Goblin Formula". He has appeared in various Spider-Man related media, such as comics, television series, video games, and films, including Spider-Man (2002) and Spider-Man: No Way Home (2021) as Norman Osborn, and Spider-Man 3 (2007) and The Amazing Spider-Man 2 (2014) as Harry Osborn. There have been other goblin-related characters like Hobgoblin, Grey Goblin, and Menace. In the video game series Elder Scrolls, goblins are a hostile beast race said to originate from Summerset Isle; they range in size from smaller than a Wood Elf to larger than a Nord, and favour dank places such as caves and sewers. In early English translations, The Smurfs were called goblins. The McDonald's Fry Guys were called Gobblins in earlier McDonaldland advertisements. The Goosebumps franchise had a Goosebumps House of Shivers book called Goblin Monday, which featured goblins. They are depicted as short creatures with green fur, horns, pointy ears and cat-like eyes who assume human form to trick humans. In addition, the goblins cannot tolerate nutmeg, which is their only weakness.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Corvus_(constellation)] | [TOKENS: 4495] |
Contents Corvus (constellation) Corvus is a small constellation in the Southern Celestial Hemisphere. Its name means "crow" in Latin. One of the 48 constellations listed by the 2nd-century astronomer Ptolemy, it depicts a raven, a bird associated with stories about the god Apollo, perched on the back of Hydra the water snake. The four brightest stars, Gamma, Delta, Epsilon, and Beta Corvi, form a distinctive quadrilateral or cross-shape in the night sky. With an apparent magnitude of 2.59, Gamma Corvi—also known as Gienah—is the brightest star in the constellation. It is an aging blue giant around four times as massive as the Sun. The young star Eta Corvi has been found to have two debris disks. Three star systems have exoplanets, and a fourth planetary system is unconfirmed. TV Corvi is a dwarf nova—a white dwarf and brown dwarf in very close orbit. History and mythology In the Babylonian star catalogues dating from at least 1100 BCE, what later became known as Corvus was called the Raven (MUL.UGA.MUSHEN). As with more familiar Classical astronomy, it was placed sitting on the tail of the Serpent (Greek Hydra). The Babylonian constellation was sacred to Adad, the god of rain and storm; in the second millennium BCE it would have risen just before the autumnal rainy season. John H. Rogers observed that Hydra signified Ningishzida, the god of the underworld in the Babylonian compendium MUL.APIN. He proposed that Corvus and Crater (along with Hydra) were death symbols and marked the gate to the underworld. These two constellations, along with the eagle Aquila and the fish Piscis Austrinus, were introduced to the Greeks around 500 BCE; they marked the winter and summer solstices respectively. Furthermore, Hydra had been a landmark as it had straddled the celestial equator in antiquity. Corvus and Crater also featured in the iconography of Mithraism, which is thought to have been of Middle Eastern origin before spreading into Ancient Greece and Rome. Corvus is associated with the myth of Apollo and his lover Coronis the Lapith. Coronis had been unfaithful to Apollo; when he learned this information from a pure white crow (or raven in some versions, called Lycius), he turned its feathers black in a fit of rage. Another legend associated with Corvus is that a crow stopped on its way to fetch water for Apollo, in order to eat figs. Instead of telling the truth to Apollo, it lied and said that a snake, Hydra, kept it from the water, holding a snake in its talons as proof. Apollo, realizing this was a lie, flung the crow (Corvus), cup (Crater), and snake (Hydra) into the sky. He further punished the wayward bird by ensuring it would forever be thirsty, both in real life and in the heavens, where the Cup is just out of reach. In Chinese astronomy, the stars of Corvus are located within the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). The four main stars depict a chariot, Zhen, which is the 28th and final lunar mansion; Alpha and Eta mark the linchpins for the wheels, and Zeta is Changsha, a coffin. In Indian astronomy, the five main stars of Corvus represent a hand or fist corresponding to the Hasta, the 13th nakshatra or lunar mansion. Corvus was recognized as a constellation by several Polynesian cultures and used as a guide for ocean navigation. In the Marquesas Islands, it was called Mee; in Pukapuka, it was called Te Manu, and in the Society Islands, it was called Metua-ai-papa.
To Torres Strait Islanders, Corvus was the right hand (holding kupa fruit) of the huge constellation Tagai, a man fishing. The Bororo people of Mato Grosso in central Brazil regarded the constellation as a land tortoise Geriguigui, while the Tucano people of the northwestern Amazon region saw it as an egret. To the Tupi people of São Luís Island in Brazil, Corvus might have been seen as a grill or barbecue—seychouioura, on which fish were grilled. The depiction could have also referred to the Great Square of Pegasus. Characteristics Covering 184 square degrees and hence 0.446% of the sky, Corvus ranks 70th of the 88 constellations in area. It is bordered by Virgo to the north and east, Hydra to the south, and Crater to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Crv". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 11h 56m 22s and 12h 56m 40s, while the declination coordinates are between −11.68° and −25.20°. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 65°N. Features The German cartographer Johann Bayer used the Greek letters Alpha through Eta to label the most prominent stars in the constellation. John Flamsteed gave nine stars Flamsteed designations, while one star he designated in the neighbouring constellation Crater—31 Crateris—lay within Corvus once the constellation boundaries were established in 1930. Within the constellation's borders, there are 29 stars brighter than or equal to apparent magnitude 6.5. Four principal stars, Delta, Gamma, Epsilon, and Beta Corvi, form a quadrilateral asterism known as "Spica's Spanker" or "the Sail". Although none of the stars are particularly bright, they lie in a dim area of the sky, rendering the asterism easy to distinguish in the night sky. Gamma and Delta serve as pointers toward Spica. Also called Gienah, Gamma is the brightest star in Corvus at magnitude 2.59. Its traditional name means "wing", the star marking the left wing in Bayer's Uranometria. Located 154±1 light-years from Earth, it is a blue-white hued giant star of spectral type B8III that is 4.2 (+0.4/−0.3) times as massive and 355 times as luminous as the Sun. Around 160 (+40/−30) million years old, it has largely exhausted its core hydrogen and begun expanding and cooling as it moves away from the main sequence. A binary star, it has a companion orange or red dwarf star of spectral type K5V to M5V that is about 0.8 times as massive as the Sun. Around 50 astronomical units distant from Gamma Corvi A, it is estimated to complete an orbit in 158 years.
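The quoted 158-year period is consistent with Kepler's third law. As a rough check (our arithmetic, assuming the 50 AU separation approximates the semi-major axis and taking a total system mass of about 4.2 + 0.8 = 5 solar masses):

    P \approx \sqrt{\frac{a^3}{M_{\mathrm{total}}}} = \sqrt{\frac{50^3}{5}}\ \mathrm{yr} = \sqrt{25\,000}\ \mathrm{yr} \approx 158\ \mathrm{yr},

with a in astronomical units, M in solar masses, and P in years.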
Delta Corvi, traditionally called Algorab, is a double star divisible in small amateur telescopes. The primary is a blue-white star of magnitude 2.9, around 87 light-years from Earth. An enigmatic star around 2.7 times as massive as the Sun, it is more luminous (65–70 times that of the Sun) than it should be for its surface temperature of 10,400 K, and hence is either a 3.2-million-year-old very young pre-main-sequence star that has not settled down to a stable main-sequence life stage, or a 260-million-year-old star that has begun to exhaust its core hydrogen and expand, cool and shine more brightly as it moves away from the main sequence. Its spectral type is given as A0IV, corresponding with the latter scenario. Warm circumstellar dust—by definition part of its inner stellar system—has been detected around Delta Corvi A. Delta Corvi B is an orange dwarf star of magnitude 8.51 and spectral class K, also surrounded by circumstellar dust. A post-T Tauri star, it is at least 650 AU distant from its brighter companion and takes at least 9,400 years to complete an orbit. Delta Corvi's common name means "the raven". It is one of two stars marking the right wing. Located 4.5 degrees northeast of Delta Corvi is Struve 1669, a binary star 280 light-years from Earth that is divisible into two stars 5.4" apart by small amateur telescopes. The pair, both white stars, are visible to the naked eye at magnitude 5.2; the primary is of magnitude 5.9 and the secondary is of magnitude 6.0. The raven's breast is marked by Beta Corvi (the proper name is Kraz), a star of magnitude 2.7 located 146 ± 1 light-years from Earth. Roughly 206 million years old and 3.7 ± 1 times as massive as the Sun, it has exhausted its core hydrogen and expanded and cooled to a surface temperature of around 5,100 K and is now a yellow bright giant star of spectral type G5II. It likely spent most of its existence as a blue-white main-sequence star of spectral type B7V. Bearing the proper name of Minkar and marking the raven's nostril is Epsilon Corvi, located some 318 ± 5 light-years from Earth. It is a red giant of spectral type K2III that is around 54 times the Sun's radius and 930 times its luminosity. Around 4 times as massive as the Sun, it spent much of its life as a main-sequence star of spectral type B5V. Lying to the south of the quadrilateral between Beta and Epsilon Corvi is the orange-hued 6 Corvi, an ageing giant star of spectral type K1III that is around 70 times as luminous as the Sun. It is 331 ± 10 light-years away from Earth. Named Alchiba, Alpha Corvi is a white-hued star of spectral type F1V and magnitude 4.0, 48.7 ± 0.1 light-years from Earth. It exhibits periodic changes in its spectrum over a three-day period, which suggests it is either a spectroscopic binary or (more likely) a pulsating Gamma Doradus-type variable. If the latter is the case, it is estimated to be 1.39 times as massive as the Sun. According to Bayer's atlas, it lies above the bird's beak. Marking the raven's right wing is Eta Corvi, a yellow-white main-sequence star of type F2V that is 1.52 times as massive and 4.87 times as luminous as the Sun. It is 59 light-years distant from the Solar System. Two debris disks have been detected orbiting this star, one warm disk within 3.5 astronomical units and another at around 150 astronomical units. Zeta Corvi marks the raven's neck. It is of apparent magnitude 5.21, separated by 7 arcseconds from the star HR 4691. Located 420 ± 10 light-years distant, it is a blue-white Be star of spectral type B8V, the presence of hydrogen emission lines in its spectrum indicating it has a circumstellar disc. These stars may be an optical double or a true multiple star system, with a separation of at least 50,000 astronomical units and the stars taking 3.5 million years to orbit each other. HR 4691 is itself double, composed of an ageing yellow-orange giant whose spectral type has been calculated at K0 or G3, and an F-type main-sequence star. 31 Crateris (which was originally placed in Crater by Flamsteed) is a magnitude 5.26 star which was once mistaken for a moon of Mercury.
On 27 March 1974, the Mariner 10 mission detected emissions in the far ultraviolet from the planet (suggesting a satellite), but they were found to emanate from the star. It is in reality a remote binary star system with a hot blue-white star of spectral type B1.5V and a companion about which little is known. The two stars orbit each other every 2.9631 days. The primary, possibly a blue straggler of the Hyades group, is around 15.5 times as massive as the Sun and 52,262 times as luminous. VV Corvi is a close spectroscopic binary, its two component stars orbiting each other with a period of 1.46 days. Both are yellow-white main-sequence stars of spectral type F5V, though the primary has begun expanding and cooling as it nears the end of its time on the main sequence. The mass ratio of the two stars is 0.775 ± 0.024. A tertiary companion was discovered during the Two Micron All-Sky Survey. W Corvi is an eclipsing binary that varies in brightness from apparent magnitude 11.16 to 12.5 over 9 hours. Its period has increased by a quarter of a second over a century. It is an unusual system in that its two stars are very close to each other yet have different surface temperatures, and hence thermal transfer is not taking place as expected. SX Corvi is an eclipsing binary that is also a contact binary known as a W Ursae Majoris variable. The two component stars orbit closely enough to each other for mass to have been transferred between them—in this case the secondary having transferred a large amount of mass to the primary. RV Corvi is another eclipsing binary. Its brightness varies from apparent magnitude 8.6 to 9.16 over 18 hours. The system is composed of stars of spectral types F0 and G0, which orbit each other every 0.7473 days. Close to Gamma Corvi and visible in the same binocular field is R Corvi, a long period (Mira) variable star. It ranges in brightness from a magnitude of 6.7 to 14.4 with a period of approximately 317 days. TT Corvi is a semiregular variable red giant of spectral type M3III and apparent magnitude 6.48 around 923 light-years distant. It is around 993 times as luminous as the Sun. TU Corvi is a Delta Scuti variable—a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. It varies by 0.025 of a magnitude around apparent magnitude 6.53 over 59 minutes. Three star systems have confirmed planets. HD 103774 is a young yellow-white main-sequence star of apparent magnitude 7.12 that is 181 ± 5 light-years distant from Earth. It is 1.335 ± 0.03 times as massive and 3.5 ± 0.3 times as luminous as the Sun. In 2013, variations in its radial velocity showed that it is orbited by a Neptune-sized planet every 5.9 days. HD 104067 is an orange dwarf of spectral type K2V of apparent magnitude 7.93 that is 69 ± 1 light-years distant from Earth. Around 80% as massive as the Sun, it is orbited by a planet 3.6 times the mass of Neptune every 55.8 days. WASP-83 has a planet around as massive as Saturn that orbits it every 5 days. It was discovered by its transit across the star in 2015. A fourth star system has an unconfirmed planet. HD 111031 is a sunlike star of spectral type G5V located 101 ± 2 light-years from Earth. Ross 695 is a red dwarf star located a mere 28.9 ± 0.6 light-years from Earth. At apparent magnitude 11.27, it is much too faint to be seen with the unaided eye. A small star, it has around 23% the mass and radius of the Sun, but only 0.7% its luminosity.
VHS J1256–1257 is a triple system of young brown dwarfs located 72.4 (+3.6/−3.9) light-years from Earth. The system consists of a central, equal-mass binary system of late-M spectral type dwarfs and an outer, planetary-mass brown dwarf companion that is widely separated at 102 ± 9 AU. DENIS-P J1228.2-1547 is a system composed of two brown dwarfs orbiting each other located 73 ± 3 light-years away from Earth. TV Corvi is a dwarf nova composed of a white dwarf and brown dwarf that orbit each other every 90 minutes. The system has a baseline magnitude of 17 and brightens periodically to magnitude 12; outbursts were detected by Clyde Tombaugh in 1931 and by David Levy in 1990 and 2005. Corvus contains no Messier objects. It has several galaxies and a planetary nebula observable with amateur telescopes. The center of Corvus is home to a planetary nebula, NGC 4361. The nebula itself resembles a small elliptical galaxy and has a magnitude of 10.3, but the magnitude 13 star at its centre gives away its true nature. Corvus also contains the asterism known as the Stargate. The NGC 4038 Group is a group of galaxies across Corvus and Crater. The group may contain between 13 and 27 galaxies. The best-known member is the Antennae peculiar galaxy, located 0.25° north of 31 Crateris. It consists of two interacting galaxies—NGC 4038 and 4039—that appear to have a heart shape as seen from Earth. The name originates from the huge tidal tails that come off the ends of the two galaxies, formed because of the spiral galaxies' original rotation. Both original galaxies were spiral galaxies and are now experiencing extensive star formation due to the interaction of gas clouds. The galaxies are 45 million light-years from Earth and each has multiple ultraluminous X-ray sources, whose nature is unknown. Astronomers theorize that they may be a rare type of X-ray-emitting binary star or intermediate-mass black holes. The Antennae Galaxies appear in a telescope at the 10th magnitude. SN 2004gt was a type Ic supernova that erupted on December 12, 2004. The progenitor was not identified from older images of the galaxy, and is either a type WC Wolf–Rayet star with a mass over 40 times that of the Sun, or a star 20 to 40 times as massive as the Sun in a binary star system. SN 2007sr was a Type Ia supernova event that peaked in brightness on December 14, 2007. The galaxy has been identified as a good place to take detailed images in case of further supernovae. NGC 4027 is another member of the NGC 4038 group, notable for its extended spiral arm. Known as the Ringtail Galaxy, it lies close to 31 Crateris. A barred spiral galaxy, its distorted shape is probably due to a past collision, possibly with the nearby NGC 4027A. NGC 4782 and NGC 4783 are a pair of merging elliptical galaxies in the northeastern part of the constellation, around 200 million light-years distant. Two established meteor showers originate from within Corvus' boundaries. German astronomer Cuno Hoffmeister discovered and named the Corvids in 1937, after observing them between June 25 and July 2. They have not been seen since, nor was there evidence of a shower when previous records were examined. Hoffmeister noted the trajectory of the shower was similar to that of the comet 11P/Tempel–Swift–LINEAR, though this was not confirmed by Zhukov and colleagues in 2011. The shower has been tentatively linked with 4015 Wilson–Harrington.
In January 2013, the IMO Video Meteor Network published the discovery of the Eta Corvids, identifying some 300 meteors seen between January 20 and 26. Their existence was confirmed by data analysis later that year. Popular culture In 1624, German astronomer Jakob Bartsch equated the constellation Argo Navis with Noah's Ark, linking Corvus and Columba to the crow and dove that feature in the story in Genesis. In Action Comics #14 (January 2013), which was published 7 November 2012, astrophysicist Neil deGrasse Tyson appears and determines that Superman's home planet, Krypton, orbited the red dwarf LHS 2520 in the constellation Corvus, 27.1 light-years from Earth. Tyson assisted DC Comics in selecting a real-life star that would be an appropriate parent star to Krypton, and picked the star in Corvus, the crow being the mascot of Superman's high school, the Smallville Crows.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Antipatris#History] | [TOKENS: 2142] |
Contents Antipatris Antipatris /ænˈtɪpətrɪs/ (Hebrew: אנטיפטריס, Ancient Greek: Αντιπατρίς) was a city built during the first century BC by Herod the Great, who named it in honour of his father, Antipater. The site, now a national park in central Israel, was inhabited from the Chalcolithic period to the Late Roman period. The remains of Antipatris are known in Modern Hebrew as Tel Afek (תל אפק), and in Arabic as Khulat Rās al-‘Ayn ('castle of the head of the spring'), after the nearby riverhead of the Yarkon. It has been identified as either the tower of Aphek mentioned by Josephus, or the biblical Aphek, best known from the story of the Battle of Aphek. During the Crusader period the site was known as Surdi fontes, "Silent springs". The Ottoman fortress known as Binar Bashi or Ras al-Ayn was built there in the 16th century. Antipatris/Tel Afek lies at the strong perennial springs of the Yarkon River, which throughout history has created an obstacle between the hill country to the east and the Mediterranean to the west, forcing travellers and armies to pass through the narrow Afek Pass between the springs and the foothills of Samaria. This gave the location of Antipatris/Tel Afek its strategic importance. Antipatris was situated on the Roman road from Caesarea Maritima to Jerusalem, north of the town of Lod where the road turned eastwards towards Jerusalem. During the British Mandate, a water pumping station was built there to channel water from the Yarkon to Jerusalem. Today the remains of Antipatris are located roughly between Petah Tikva and the towns of Kafr Qasim and Rosh HaAyin (literally "headspring"), south of Hod HaSharon. History The Bronze Age saw the construction of defensive walls, 2.5 metres (8.2 ft) to 3.5 metres (11 ft) wide, and a series of palaces. One of these is described as an Egyptian governor's residence of the 15th century BC, within which an array of cuneiform tablets was found. Philistine ware is found at the site in 12th-century BC layers. Most scholars agree that there was more than one Aphek. While Tel-Aphek (Antipatris) is one of them, C. R. Conder identified the Aphek of Eben-Ezer with a ruin (Khirbet) some 3.7 miles (6 km) distant from Dayr Aban (believed to be Eben-Ezer), and known by the name Marj al-Fikiya; the name al-Fikiya being an Arabic corruption of Aphek. Eusebius, when writing about Eben-ezer in his Onomasticon, says that it is "the place from which the Gentiles seized the Ark, between Jerusalem and Ascalon, near the village of Bethsamys (Beit Shemesh)," a locale that corresponds with Conder's identification. The historian Josephus mentions a certain tower called Aphek, not far from Antipatris, which was burnt by a contingent of Roman soldiers. Antipatris was a city built by Herod the Great, and named in honour of his father, Antipater II of Judea. It lay between Caesarea Maritima and Lydda, on the great Roman road from Caesarea to Jerusalem, and figures prominently in Roman-era history. Today, the nearby river bears the town's old namesake in Arabic (Arabic: نهر أبو فطرس, romanized: Nahr Abū Fiṭrus). According to Josephus, Antipatris was built on the site of an older town that was formerly called Chabarzaba (Hebrew: כפר סבא), a place so-named in classical Jewish literature and in the Mosaic of Rehob. During the outbreak of the Jewish war with Rome in 66 CE, the Roman army under Cestius was routed as far as Antipatris.
Paul the Apostle was brought by night from Jerusalem to Antipatris and the next day from there to Caesarea Maritima, to stand trial before the governor Antonius Felix; see Acts of the Apostles 23:31-32. In 363, the city was badly damaged by an earthquake. Only one of the early bishops of the Christian bishopric of Antipatris, a suffragan of Caesarea, is mentioned by name in extant documentation: Polychronius, who was present both at the Robber Council of Ephesus in 449 and at the Council of Chalcedon in 451. No longer a residential bishopric, Antipatris is today listed by the Catholic Church as a titular see. On 27 April 750, the Abbasid general Abd Allah ibn Ali, uncle of Caliph al-Saffah (r. 750–754), marched to Antipatris ('Abu Futrus'). There, he summoned around eighty members of the Umayyad dynasty, whom the Abbasids had toppled earlier that year, with promises of fair surrender terms, only to have them massacred. Ottoman records indicate that a Mamluk fortress may have stood on the site. The Ottoman fortress, however, was built following the publication of a firman in AD 1573 (981 H.): "You have sent a letter and have reported that four walls of the fortress Ras al-Ayn have been built, [..] I have commanded that when [this firman] arrives you shall [..have built] the above mentioned rooms and mosque with its minaret and have the guards remove the earth outside and clean and tidy [the place]." The Turkish name of the place and fortress, pınar başı, means "fountain-head" or simply "head of the springs", much like the Arabic and Hebrew names (Ras al-Ayin and Rosh ha-Ayin, "head of the springs"). Pronounced by Arabic-speakers, it became "Binar Bashi" (Arabic has no "p"). The fortress was built to protect a vulnerable stretch of the Cairo–Damascus highway (the Via Maris), and was provided with 100 horsemen and 30 foot soldiers. The fortress was also supposed to supply soldiers to protect the hajj route. The fortress is a massive rectangular enclosure with four corner towers and a gate at the centre of the west side. The south-west tower is octagonal, while the three other towers have a square ground plan. It appeared as Chateau de Ras el Ain on the map that Pierre Jacotin compiled in 1799. The Arab peasants deserted the village in the 1920s. Currently, the site of Antipatris is included in the national park "Yarkon-Tel Afek", under the jurisdiction of the Israel Nature and Parks Authority, incorporating the area of the Ottoman fortress, the remains of the Roman city and the British water pumping station. Excavation The earliest winepresses discovered to date in the Southern Levant were excavated adjoining the governor's residency at Tel Aphek, dated to the 13th century BC, the reign of Ramesses II. The two winepresses were plastered and possessed two treading floors (Hebrew: gat elyonah, “upper vat”) in parallel configuration extending over 6 m². Beneath and next to these, the stone-lined plastered collection vats (Hebrew: gat tahtonah, “lower vat”) could each store over 3 m³, or 3,000 litres, of pressed grape juice. Canaanite amphorae were recovered still in situ at the bottom of each pit, while a midden of grape skins, seeds and other debris was discovered adjacent to the installations [Kochavi 1981:81].
The excavator has drawn attention to the proximity of these winepresses to the Residency, their large size, and the fact that ancient winepresses were normally located outside settlements amongst the vineyards, suggesting that the Egyptian administration supervised the viniculturists of the Sharon closely [Kochavi 1990:XXIII]. It is clear that Tel Aphek was a site not only at the centre of imperial administration, but also well connected to the international trade in luxury goods, as reflected in the abundant finds of Cypriot and Mycenaean ceramics. Illustrative of Cypro-Canaanite trade especially is a fragmentary amphora handle [Aphek 5/29277], clearly inscribed after firing with Sign 38 of the Cypro-Minoan Linear Script [Yasur-Landau and Goren 2004]. The handle was excavated from secondary deposition in Aphek Area X, Locus 2953, belonging to the very meagre Stratum X11 built over the Governor's Residency. It is therefore very likely that the object belonged to the earlier, more prosperous Stratum X12 of the Residency itself. Given the as-yet-undeciphered nature of the script, the precise significance of the post-firing addition of a Cypro-Minoan sign must remain uncertain. At minimum, the sign indicates that individuals employing the Cypro-Minoan script handled the vessel from which the handle derived. Combined with petrographic analysis of the clay employed in manufacturing the amphora, which points to an origin in or near Akko, the most straightforward reconstruction of the evidence is that the vessel (and any companions) was manufactured in the Akko region before shipping, either to such redistribution points as Tell Abu Hawam or Tel Nami, or (more likely) to Cyprus itself (perhaps via one of these ports), where it was likely emptied of its original contents, and certainly marked, before being shipped back to the Levant (now probably containing Cypriot product) and achieving final deposition at Aphek.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Transport_Layer_Security] | [TOKENS: 10374] |
Transport Layer Security Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network, such as the Internet. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible. The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record and the TLS handshake protocols. The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions. TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999; the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Netscape Navigator web browser. Description Client–server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering. Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server set up a TLS connection. One of the main ways of achieving this is to use a different port number for TLS connections: port 80 is typically used for unencrypted HTTP traffic, while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is to make a protocol-specific STARTTLS request to the server to switch the connection to TLS – for example, when using some mail and news protocols. Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure (see § TLS handshake). The protocols use a handshake with an asymmetric cipher to establish not only cipher settings but also a session-specific shared key with which further communication is encrypted using a symmetric cipher. During this handshake, the client and server agree on the various parameters used to establish the connection's security; this concludes the handshake and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of these steps fails, the TLS handshake fails and the connection is not created. Note that TLS 1.3 only allows key exchange algorithms providing forward secrecy. Consequently, establishing a PreMasterSecret using the server's public and private key is only available in TLS 1.2 and below. TLS and SSL do not fit neatly into any single layer of the OSI model or the TCP/IP model. TLS runs "on top of some reliable transport protocol (e.g., TCP),": §1 which would imply that it is above the transport layer. It serves encryption to higher layers, which is normally the function of the presentation layer.
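As a concrete illustration of the connection setup just described, the following minimal sketch (using Python's standard ssl module; the host name is only a placeholder) opens a TCP connection to port 443 and performs the TLS handshake before any application data is sent. Note how, once wrapped, the socket is used like an ordinary transport:

```python
import socket
import ssl

hostname = "example.org"                 # placeholder host, not a real target
context = ssl.create_default_context()   # trusted CAs and sensible defaults

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake; server_hostname also sends the
    # Server Name Indication so the server can pick the right certificate.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())   # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite
        # After the handshake, application data flows over the same socket.
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        print(tls.recv(256))
```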
However, applications generally use TLS as if it were a transport layer, even though applications using TLS must actively control initiating TLS handshakes and handling of exchanged authentication certificates.: §1 When secured by TLS, connections between a client (e.g., a web browser) and a server (e.g., wikipedia.org) gain confidentiality, integrity, and authenticity properties.: §1 TLS supports many different methods for exchanging keys, encrypting data, and authenticating message integrity. As a result, secure configuration of TLS involves many configurable parameters, and not all choices provide all of the privacy-related properties described above (see the tables below: § Key exchange, § Cipher security, and § Data integrity). Attempts have been made to subvert aspects of the communications security that TLS seeks to provide, and the protocol has been revised several times to address these security threats. Developers of web browsers have repeatedly revised their products to defend against potential security weaknesses after these were discovered (see TLS/SSL support history of web browsers). Datagram Transport Layer Security, abbreviated DTLS, is a related communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. However, unlike TLS, it can be used with most datagram-oriented protocols, including User Datagram Protocol (UDP), Datagram Congestion Control Protocol (DCCP), Control And Provisioning of Wireless Access Points (CAPWAP), Stream Control Transmission Protocol (SCTP) encapsulation, and Secure Real-time Transport Protocol (SRTP). As the DTLS protocol datagram preserves the semantics of the underlying transport, the application does not suffer from the delays associated with stream protocols. However, the application has to deal with packet reordering, loss of datagrams, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the TCP meltdown problem when used to create a VPN tunnel. The original 2006 release of DTLS version 1.0 was not a standalone document; it was given as a series of deltas to TLS 1.1.: §4 Similarly, the follow-up 2012 release of DTLS is a delta to TLS 1.2, and was given the version number DTLS 1.2 to match its TLS version. Lastly, the 2022 DTLS 1.3 is a delta to TLS 1.3. Like the two previous versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability". Many VPN clients, including Cisco AnyConnect & InterCloud Fabric, OpenConnect, ZScaler tunnel, F5 Networks Edge VPN Client, and Citrix Systems NetScaler, use DTLS to secure UDP traffic. In addition, all modern web browsers support DTLS-SRTP for WebRTC. History and development In August 1986, the National Security Agency, the National Bureau of Standards, and the Defense Communications Agency launched a project called the Secure Data Network System (SDNS), with the intent of designing the next generation of secure computer communications network and product specifications to be implemented for applications on public and private internets. It was intended to complement the rapidly emerging new OSI internet standards moving forward both in the U.S.
government's GOSIP Profiles and in the huge ITU-ISO JTC1 internet effort internationally. As part of the project, researchers designed a protocol called SP4 (security protocol in layer 4 of the OSI system). This was later renamed the Transport Layer Security Protocol (TLSP) and subsequently published in 1995 as the international standard ITU-T X.274 / ISO/IEC 10736:1995. Despite the name similarity, this is distinct from today's TLS. Other efforts towards transport layer security included the Secure Network Programming (SNP) application programming interface (API), which in 1993 explored the approach of having a secure transport layer API closely resembling Berkeley sockets, to facilitate retrofitting pre-existing network applications with security measures. SNP was published and presented at the 1994 USENIX Summer Technical Conference. The SNP project was funded by a grant from the NSA to Professor Simon Lam at UT-Austin in 1991. Secure Network Programming won the 2004 ACM Software System Award. Simon Lam was inducted into the Internet Hall of Fame for "inventing secure sockets in 1991 and implementing the first secure sockets layer, named SNP, in 1993." Netscape developed the original SSL protocols, and Taher Elgamal, chief scientist at Netscape Communications from 1995 to 1998, has been described as the "father of SSL". SSL version 1.0 was never publicly released because of serious security flaws in the protocol. Version 2.0, released in February 1995, was quickly found to contain a number of security and usability flaws. It used the same cryptographic keys for message authentication and encryption. It had a weak MAC construction that used the MD5 hash function with a secret prefix, making it vulnerable to length extension attacks. It also provided no protection for either the opening handshake or an explicit message close, both of which meant man-in-the-middle attacks could go undetected. Moreover, SSL 2.0 assumed a single service and a fixed domain certificate, conflicting with the widely used feature of virtual hosting in Web servers, so most websites were effectively prevented from using SSL. These flaws necessitated the complete redesign of the protocol as SSL version 3.0. Released in 1996, it was produced by Paul Kocher working with Netscape engineers Phil Karlton and Alan Freier, with a reference implementation by Christopher Allen and Tim Dierks of Certicom. Newer versions of SSL/TLS are based on SSL 3.0. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101. SSL 2.0 was deprecated in 2011 by RFC 6176. In 2014, SSL 3.0 was found to be vulnerable to the POODLE attack, which affects all block ciphers in SSL; RC4, the only non-block cipher supported by SSL 3.0, is also feasibly broken as used in SSL 3.0. SSL 3.0 was deprecated in June 2015 by RFC 7568. TLS 1.0 was first defined in RFC 2246 in January 1999 as an upgrade of SSL version 3.0, and written by Christopher Allen and Tim Dierks of Certicom. As stated in the RFC, "the differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough to preclude interoperability between TLS 1.0 and SSL 3.0". Tim Dierks later wrote that these changes, and the renaming from "SSL" to "TLS", were a face-saving gesture to Microsoft, "so it wouldn't look [like] the IETF was just rubberstamping Netscape's protocol". The PCI Council suggested that organizations migrate from TLS 1.0 to TLS 1.1 or higher before June 30, 2018.
In October 2018, Apple, Google, Microsoft, and Mozilla jointly announced they would deprecate TLS 1.0 and 1.1 in March 2020. TLS 1.0 and 1.1 were formally deprecated in RFC 8996 in March 2021. TLS 1.1 was defined in RFC 4346 in April 2006 as an update of TLS version 1.0, introducing a number of significant changes. Support for TLS versions 1.0 and 1.1 was widely deprecated by web sites around 2020, disabling access to Firefox versions before 24 and Chromium-based browsers before 29, though third-party fixes can be applied to Netscape Navigator and older versions of Firefox to add TLS 1.2 support. TLS 1.2 was defined in RFC 5246 in August 2008, based on the earlier TLS 1.1 specification, from which it differs in several major respects. All TLS versions were further refined in RFC 6176 in March 2011, removing their backward compatibility with SSL such that TLS sessions never negotiate the use of Secure Sockets Layer (SSL) version 2.0. As of April 2025 there is no formal date for TLS 1.2 to be deprecated. The specification for TLS 1.2 was also redefined by the Standards Track document RFC 8446 to keep it as secure as possible; TLS 1.2 is now to be seen as a fallback protocol, meant only to be negotiated with clients which are unable to use TLS 1.3 (the original RFC 5246 definition of TLS 1.2 has since been obsoleted). TLS 1.3 was defined in RFC 8446 in August 2018, based on the earlier TLS 1.2 specification but differing from it in several major respects. Network Security Services (NSS), the cryptography library developed by Mozilla and used by its web browser Firefox, enabled TLS 1.3 by default in February 2017. TLS 1.3 support was subsequently added to Firefox 52.0, released in March 2017, but, due to compatibility issues for a small number of users, it was not automatically enabled. TLS 1.3 was enabled by default in May 2018 with the release of Firefox 60.0. Google Chrome set TLS 1.3 as the default version for a short time in 2017, then removed it as the default due to incompatible middleboxes such as Blue Coat web proxies. This intolerance of the new version of TLS was an instance of protocol ossification: middleboxes had ossified the protocol's version parameter. As a result, version 1.3 mimics the wire image of version 1.2. This change occurred very late in the design process, the intolerance only having been discovered during browser deployment. The discovery of this intolerance also led to the prior version negotiation strategy, where the highest matching version was picked, being abandoned due to unworkable levels of ossification. 'Greasing' an extension point, in which one protocol participant claims support for non-existent extensions so that unrecognised but genuine extensions are tolerated and ossification is resisted, was originally designed for TLS, but it has since been adopted elsewhere. During the IETF 100 Hackathon, which took place in Singapore in 2017, the TLS Group worked on adapting open-source applications to use TLS 1.3. The TLS group was made up of individuals from Japan, the United Kingdom, and Mauritius via the cyberstorm.mu team. This work was continued at the IETF 101 Hackathon in London and the IETF 102 Hackathon in Montreal. wolfSSL enabled the use of TLS 1.3 as of version 3.11.1, released in May 2017. As the first commercial TLS 1.3 implementation, wolfSSL 3.11.1 supported Draft 18 and now supports Draft 28, the final version, as well as many older versions. A series of blog posts was published on the performance difference between TLS 1.2 and 1.3.
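As a minimal sketch of how an application can enforce the deprecation of TLS 1.0 and 1.1 described above, Python's standard ssl module (3.7 and later) exposes minimum and maximum protocol versions on the context:

```python
import ssl

# Refuse the deprecated TLS 1.0/1.1 outright while still allowing 1.2 as a
# fallback for clients that cannot use 1.3 (mirroring RFC 8996's guidance).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject TLS 1.0 and 1.1
context.maximum_version = ssl.TLSVersion.TLSv1_3
```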
In September 2018, the popular OpenSSL project released version 1.1.1 of its library, in which support for TLS 1.3 was "the headline new feature". Support for TLS 1.3 was added to Secure Channel (schannel) for the GA releases of Windows 11 and Windows Server 2022. The Electronic Frontier Foundation praised TLS 1.3 and expressed concern about the variant protocol Enterprise Transport Security (ETS), which intentionally disables important security measures in TLS 1.3. Originally called Enterprise TLS (eTLS), ETS is a published standard known as 'ETSI TS103523-3', "Middlebox Security Protocol, Part 3: Enterprise Transport Security". It is intended for use entirely within proprietary networks such as banking systems. ETS does not support forward secrecy, so as to allow third-party organizations connected to the proprietary networks to use their private key to monitor network traffic for the detection of malware and to make it easier to conduct audits. Despite the claimed benefits, the EFF warned that the loss of forward secrecy could make it easier for data to be exposed, adding that there are better ways to analyze traffic. Digital certificates A digital certificate certifies the ownership of a public key by the named subject of the certificate, and indicates certain expected usages of that key. This allows others (relying parties) to rely upon signatures or on assertions made by the private key that corresponds to the certified public key. Keystores and trust stores can be in various formats, such as .pem, .crt, .pfx, and .jks. TLS typically relies on a set of trusted third-party certificate authorities to establish the authenticity of certificates. Trust is usually anchored in a list of certificates distributed with user agent software, and can be modified by the relying party. According to Netcraft, which monitors active TLS certificates, the market-leading certificate authority (CA) had been Symantec since the beginning of its survey (or VeriSign before the authentication services business unit was purchased by Symantec). As of 2015, Symantec accounted for just under a third of all certificates and 44% of the valid certificates used by the 1 million busiest websites, as counted by Netcraft. In 2017, Symantec sold its TLS/SSL business to DigiCert. An updated report showed that IdenTrust, DigiCert, and Sectigo have been the top three certificate authorities by market share since May 2019. As a consequence of choosing X.509 certificates, certificate authorities and a public key infrastructure are necessary to verify the relation between a certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more convenient than verifying identities via a web of trust, the 2013 mass surveillance disclosures made it more widely known that certificate authorities are a weak point from a security standpoint, allowing man-in-the-middle attacks (MITM) if the certificate authority cooperates (or is compromised). On April 11, 2025, the CA/Browser Forum approved a ballot that will require all public TLS certificate lifespans to be gradually reduced to 47 days by 2029. The ballot was proposed by Apple. Algorithms Before a client and server can begin to exchange information protected by TLS, they must securely exchange or agree upon an encryption key and a cipher to use when encrypting data (see § Cipher).
Among the methods used for key exchange/agreement are: public and private keys generated with RSA (denoted TLS_RSA in the TLS handshake protocol), Diffie–Hellman (TLS_DH), ephemeral Diffie–Hellman (TLS_DHE), elliptic-curve Diffie–Hellman (TLS_ECDH), ephemeral elliptic-curve Diffie–Hellman (TLS_ECDHE), anonymous Diffie–Hellman (TLS_DH_anon), pre-shared key (TLS_PSK) and Secure Remote Password (TLS_SRP). The TLS_DH_anon and TLS_ECDH_anon key agreement methods do not authenticate the server or the user and hence are rarely used, because they are vulnerable to man-in-the-middle attacks. Only TLS_DHE and TLS_ECDHE provide forward secrecy. Public key certificates used during exchange/agreement also vary in the size of the public/private encryption keys used during the exchange, and hence in the robustness of the security provided. In July 2013, Google announced that it would no longer use 1024-bit public keys and would switch instead to 2048-bit keys to increase the security of the TLS encryption it provides to its users, because the encryption strength is directly related to the key size. Notes A message authentication code (MAC) is used for data integrity. HMAC is used for the CBC mode of block ciphers. Authenticated encryption (AEAD), such as GCM and CCM mode, uses an AEAD-integrated MAC and does not use HMAC.: §8.4 An HMAC-based PRF, or HKDF, is used for the TLS handshake. Applications and adoption In application design, TLS is usually implemented on top of transport layer protocols, encrypting all of the protocol-related data of protocols such as HTTP, FTP, SMTP, NNTP and XMPP. Historically, TLS has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). However, it has also been implemented with datagram-oriented transport protocols, such as the User Datagram Protocol (UDP) and the Datagram Congestion Control Protocol (DCCP), usage of which has been standardized independently using the term Datagram Transport Layer Security (DTLS). A primary use of TLS is to secure World Wide Web traffic between a website and a web browser encoded with the HTTP protocol. This use of TLS to secure HTTP traffic constitutes the HTTPS protocol. Notes As of March 2025, the latest versions of all major web browsers support TLS 1.2 and 1.3 and have them enabled by default, with the exception of IE 11. TLS 1.0 and 1.1 are disabled by default on the latest versions of all major browsers. Mitigations against known attacks are not yet sufficient. Most SSL and TLS programming libraries are free and open-source software. A paper presented at the 2012 ACM conference on computer and communications security showed that many applications used some of these SSL libraries incorrectly, leading to vulnerabilities. According to the authors: "The root cause of most of these vulnerabilities is the terrible design of the APIs to the underlying SSL libraries. Instead of expressing high-level security properties of network tunnels such as confidentiality and authentication, these APIs expose low-level details of the SSL protocol to application developers. As a consequence, developers often use SSL APIs incorrectly, misinterpreting and misunderstanding their manifold parameters, options, side effects, and return values." The Simple Mail Transfer Protocol (SMTP) can also be protected by TLS. These applications use public key certificates to verify the identity of endpoints. TLS can also be used for tunneling an entire network stack to create a VPN, which is the case with OpenVPN and OpenConnect.
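The note above on MACs can be made concrete with a small sketch: an HMAC tag computed over a record with a shared secret key, verified in constant time by the receiver. The key and message below are illustrative values only, not the actual TLS key schedule:

```python
import hashlib
import hmac

mac_key = b"\x00" * 32           # placeholder for a session MAC key
record = b"application data"     # placeholder record contents

# Sender computes a tag binding the record to the shared key.
tag = hmac.new(mac_key, record, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time;
# a mismatch means the record was modified in transit.
expected = hmac.new(mac_key, record, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```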
Many vendors have now married TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of web browsers, in order to enable support for client/server applications. Compared to traditional IPsec VPN technologies, TLS has some inherent advantages in firewall and NAT traversal that make it easier to administer for large remote-access populations. TLS is also a standard method for protecting Session Initiation Protocol (SIP) application signaling. TLS can be used for providing authentication and encryption of the SIP signaling associated with VoIP and other SIP-based applications. Security Significant attacks against TLS/SSL are listed below. In February 2015, the IETF issued an informational RFC summarizing the various known attacks against TLS/SSL. A vulnerability of the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS. For example, it allows an attacker who can hijack an https connection to splice their own requests into the beginning of the conversation the client has with the web server. The attacker cannot actually decrypt the client–server communication, so it is different from a typical man-in-the-middle attack. A short-term fix is for web servers to stop allowing renegotiation, which typically will not require other changes unless client certificate authentication is used. To fix the vulnerability, a renegotiation indication extension was proposed for TLS. It requires the client and server to include and verify information about previous handshakes in any renegotiation handshakes. This extension has been implemented by several libraries. A protocol downgrade attack (also called a version rollback attack) tricks a web server into negotiating connections with previous versions of TLS (such as SSLv2) that have long since been abandoned as insecure. Previous modifications to the original protocols, like False Start (adopted and enabled by Google Chrome) or Snap Start, reportedly introduced limited TLS protocol downgrade attacks or allowed modifications to the cipher suite list sent by the client to the server. In doing so, an attacker might succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite negotiated to use either a weaker symmetric encryption algorithm or a weaker key exchange. A paper presented at an ACM conference on computer and communications security in 2012 demonstrated that the False Start extension was at risk: in certain circumstances it could allow an attacker to recover the encryption keys offline and to access the encrypted data. Encryption downgrade attacks can force servers and clients to negotiate a connection using cryptographically weak keys. In 2014, a man-in-the-middle attack called FREAK was discovered affecting the OpenSSL stack, the default Android web browser, and some Safari browsers. The attack involved tricking servers into negotiating a TLS connection using cryptographically weak 512-bit encryption keys. Logjam is a security exploit discovered in May 2015 that exploits the option of using legacy "export-grade" 512-bit Diffie–Hellman groups dating back to the 1990s. It forces susceptible servers to downgrade to cryptographically weak 512-bit Diffie–Hellman groups. An attacker can then deduce the keys the client and server determine using the Diffie–Hellman key exchange.
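A toy sketch of why the small Diffie–Hellman groups abused by FREAK and Logjam are weak: with a small modulus, the discrete logarithm, and hence the shared secret, can be recovered by brute force. Real export-grade groups are 512 bits; the tiny prime below is purely illustrative:

```python
p, g = 2_147_483_647, 5          # toy public parameters (p = 2**31 - 1)
a, b = 123_456, 654_321          # the peers' secret exponents

A = pow(g, a, p)                 # public values an eavesdropper sees
B = pow(g, b, p)
shared = pow(B, a, p)            # the session secret both peers derive

# Brute-force a discrete log of A = g**x (mod p); any exponent x with
# g**x == A lets the attacker reconstruct the shared secret.
x, acc = 0, 1
while acc != A:
    acc = acc * g % p
    x += 1

print(pow(B, x, p) == shared)    # True: the attacker has the session secret
```

The work grows exponentially with the group size, which is why 2048-bit groups resist this while 512-bit export groups (with better algorithms than brute force) do not.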
The DROWN attack is an exploit that attacks servers supporting contemporary SSL/TLS protocol suites by exploiting their support for the obsolete, insecure SSLv2 protocol to leverage an attack on connections using up-to-date protocols that would otherwise be secure. DROWN exploits a vulnerability in the protocols used and the configuration of the server, rather than any specific implementation error. Full details of DROWN were announced in March 2016, together with a patch for the exploit. At that time, more than 81,000 of the top 1 million most popular websites were among the TLS-protected websites that were vulnerable to the DROWN attack. On September 23, 2011, researchers Thai Duong and Juliano Rizzo demonstrated a proof of concept called BEAST (Browser Exploit Against SSL/TLS), using a Java applet to violate same-origin policy constraints, for a long-known cipher block chaining (CBC) vulnerability in TLS 1.0: an attacker observing 2 consecutive ciphertext blocks C0, C1 can test if the plaintext block P1 is equal to x by choosing the next plaintext block P2 = x ⊕ C0 ⊕ C1; as per CBC operation, C2 = E(C1 ⊕ P2) = E(C1 ⊕ x ⊕ C0 ⊕ C1) = E(C0 ⊕ x), which will be equal to C1 if x = P1. Practical exploits had not been previously demonstrated for this vulnerability, which was originally discovered by Phillip Rogaway in 2002. The vulnerability had been fixed with TLS 1.1 in 2006, but TLS 1.1 had not seen wide adoption prior to this attack demonstration. RC4, as a stream cipher, is immune to the BEAST attack. Therefore, RC4 was widely used as a way to mitigate the BEAST attack on the server side. However, in 2013, researchers found more weaknesses in RC4. Thereafter, enabling RC4 on the server side was no longer recommended. Chrome and Firefox themselves are not vulnerable to the BEAST attack; however, Mozilla updated its NSS libraries to mitigate BEAST-like attacks. NSS is used by Mozilla Firefox and Google Chrome to implement SSL. Some web servers that have a broken implementation of the SSL specification may stop working as a result. Microsoft released Security Bulletin MS12-006 on January 10, 2012, which fixed the BEAST vulnerability by changing the way that the Windows Secure Channel (Schannel) component transmits encrypted network packets from the server end. Users of Internet Explorer (prior to version 11) that run on older versions of Windows (Windows 7, Windows 8 and Windows Server 2008 R2) can restrict use of TLS to 1.1 or higher. Apple fixed the BEAST vulnerability by implementing a 1/n-1 split and turning it on by default in OS X Mavericks, released on October 22, 2013. The authors of the BEAST attack are also the creators of the later CRIME attack, which can allow an attacker to recover the content of web cookies when data compression is used along with TLS. When used to recover the content of secret authentication cookies, it allows an attacker to perform session hijacking on an authenticated web session. While the CRIME attack was presented as a general attack that could work effectively against a large number of protocols, including but not limited to TLS and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined.
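The CBC relation behind BEAST, given above, can be verified numerically. The sketch below uses the third-party cryptography package with illustrative values (a zero key and IV, not real TLS traffic) to show that injecting P2 = x ⊕ C0 ⊕ C1 yields C2 = C1 exactly when the guess x equals P1:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(i ^ j for i, j in zip(a, b))

def E(key: bytes, block: bytes) -> bytes:
    # One raw AES block encryption (single-block ECB stands in for E).
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

key = bytes(16)             # placeholder key (unknown to the attacker)
C0 = bytes(16)              # IV / previous ciphertext block (observed)
P1 = b"secret block 01!"    # 16-byte plaintext block to recover

C1 = E(key, xor(P1, C0))    # CBC encryption of P1: C1 = E(P1 xor C0)

x = P1                      # attacker's (here, correct) guess for P1
P2 = xor(xor(x, C0), C1)    # chosen next plaintext block
C2 = E(key, xor(P2, C1))    # CBC encryption of P2: C2 = E(x xor C0)

assert C2 == C1             # collision confirms the guess x == P1
```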
In 2013, a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was announced. Based on the CRIME attack, a BREACH attack can extract login tokens, email addresses or other sensitive information from TLS-encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link or is able to inject content into valid pages the user is visiting (e.g., via a wireless network under the control of the attacker). All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used. Unlike previous instances of CRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression, which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users. This is a known limitation of TLS, as it is susceptible to chosen-plaintext attack against the application-layer data it was meant to protect. Earlier TLS versions were vulnerable to the padding oracle attack discovered in 2002. A novel variant, called the Lucky Thirteen attack, was published in 2013. Some experts also recommended avoiding triple-DES CBC. Since the last supported ciphers developed to support any program using Windows XP's SSL/TLS library, such as Internet Explorer on Windows XP, are RC4 and Triple-DES, and since RC4 is now deprecated (see discussion of RC4 attacks), it is difficult to support any version of SSL for any program using this library on XP. A fix was released in 2014 as the Encrypt-then-MAC extension to the TLS specification. The Lucky Thirteen attack can be mitigated in TLS 1.2 by using only AES_GCM ciphers; AES_CBC remains vulnerable. SSL may safeguard email, VoIP, and other types of communications over insecure networks in addition to its primary use case of secure data transmission between a client and a server. On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0 which makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption). On average, attackers only need to make 256 SSL 3.0 requests to reveal one byte of encrypted messages. Although this vulnerability only exists in SSL 3.0 and most clients and servers support TLS 1.0 and above, all major browsers voluntarily downgrade to SSL 3.0 if the handshakes with newer versions of TLS fail, unless they provide the option for a user or administrator to disable SSL 3.0 and the user or administrator does so. Therefore, a man-in-the-middle can first conduct a version rollback attack and then exploit this vulnerability. On December 8, 2014, a variant of POODLE was announced that impacts TLS implementations that do not properly enforce padding byte requirements. Despite the existence of attacks on RC4 that broke its security, cipher suites in SSL and TLS that were based on RC4 were still considered secure prior to 2013, based on the way in which they were used in SSL and TLS. In 2011, the RC4 suite was actually recommended as a workaround for the BEAST attack. New forms of attack disclosed in March 2013 conclusively demonstrated the feasibility of breaking RC4 in TLS, suggesting it was not a good workaround for BEAST.
An attack scenario was proposed by AlFardan, Bernstein, Paterson, Poettering and Schuldt that used newly discovered statistical biases in the RC4 key table to recover parts of the plaintext with a large number of TLS encryptions. An attack on RC4 in TLS and SSL that requires 13 × 2²⁰ encryptions to break RC4 was unveiled on 8 July 2013 and later described as "feasible" in the accompanying presentation at a USENIX Security Symposium in August 2013. In July 2015, subsequent improvements in the attack made it increasingly practical to defeat the security of RC4-encrypted TLS. As many modern browsers have been designed to defeat BEAST attacks (except Safari for Mac OS X 10.7 or earlier, for iOS 6 or earlier, and for Windows; see § Web browsers), RC4 is no longer a good choice for TLS 1.0. The CBC ciphers that were affected by the BEAST attack in the past have become a more popular choice for protection. Mozilla and Microsoft recommend disabling RC4 where possible. In February 2015, the use of RC4 cipher suites was officially prohibited in all versions of TLS. On September 1, 2015, Microsoft, Google, and Mozilla announced that RC4 cipher suites would be disabled by default in their browsers (Microsoft Edge [Legacy], Internet Explorer 11 on Windows 7/8.1/10, Firefox, and Chrome) in early 2016. A TLS (logout) truncation attack blocks a victim's account logout requests so that the user unknowingly remains logged into a web service. When the request to sign out is sent, the attacker injects an unencrypted TCP FIN message (no more data from sender) to close the connection. The server therefore does not receive the logout request and is unaware of the abnormal termination. Published in July 2013, the attack causes web services such as Gmail and Hotmail to display a page that informs the user that they have successfully signed out, while ensuring that the user's browser maintains authorization with the service, allowing an attacker with subsequent access to the browser to access and take over control of the user's logged-in account. The attack does not rely on installing malware on the victim's computer; attackers need only place themselves between the victim and the web server (e.g., by setting up a rogue wireless hotspot). This vulnerability also requires access to the victim's computer. Another possibility is that, when using FTP, the data connection can have a false FIN injected into the data stream; if the protocol rules for exchanging close_notify alerts are not adhered to, a file can be truncated. In February 2013, two researchers from Royal Holloway, University of London discovered a timing attack which allowed them to recover (parts of the) plaintext from a DTLS connection using the OpenSSL or GnuTLS implementation of DTLS when Cipher Block Chaining mode encryption was used. Another attack, discovered in mid-2016, exploits weaknesses in the Web Proxy Autodiscovery Protocol (WPAD) to expose the URL that a web user is attempting to reach via a TLS-enabled web link. Disclosure of a URL can violate a user's privacy, not only because of the website accessed, but also because URLs are sometimes used to authenticate users. Document sharing services, such as those offered by Google and Dropbox, also work by sending a user a security token that is included in the URL. An attacker who obtains such URLs may be able to gain full access to a victim's account or data. The exploit works against almost all browsers and operating systems.
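The statistical biases in RC4 mentioned above can be observed directly. The toy sketch below (a textbook RC4 implementation, not the TLS usage) measures the classic Mantin–Shamir bias: over many random keys, the second keystream byte is 0 roughly twice as often as the uniform 1/256; it is biases of this kind, aggregated over many encryptions, that the attacks exploit:

```python
import random

def rc4_second_byte(key: bytes) -> int:
    S = list(range(256))
    j = 0
    for i in range(256):                          # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = out = 0
    for _ in range(2):                            # generate two keystream bytes
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
    return out                                    # keep only the second byte

trials = 100_000                                  # takes a little while in pure Python
hits = sum(rc4_second_byte(random.randbytes(16)) == 0 for _ in range(trials))
print(hits / trials)   # ≈ 2/256 ≈ 0.0078, versus the uniform 1/256 ≈ 0.0039
```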
The Sweet32 attack breaks all 64-bit block ciphers used in CBC mode as used in TLS by exploiting a birthday attack and either a man-in-the-middle attack or injection of malicious JavaScript into a web page. The purpose of the man-in-the-middle attack or the JavaScript injection is to allow the attacker to capture enough traffic to mount a birthday attack. The Heartbleed bug is a serious vulnerability specific to the implementation of SSL/TLS in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness, reported in April 2014, allows attackers to steal private keys from servers that should normally be protected. The Heartbleed bug allows anyone on the Internet to read the memory of systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret private keys associated with the public certificates used to identify the service providers and to encrypt the traffic, the names and passwords of the users, and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users, and impersonate services and users. The vulnerability is caused by a buffer over-read bug in the OpenSSL software, rather than a defect in the SSL or TLS protocol specification. In September 2014, a variant of Daniel Bleichenbacher's PKCS#1 v1.5 RSA signature forgery vulnerability was announced by Intel Security Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a man-in-the-middle attack by forging a public key signature. In February 2015, after media reported the hidden pre-installation of Superfish adware on some Lenovo notebooks, a researcher found a trusted root certificate on affected Lenovo machines to be insecure, as the keys could easily be accessed using the company name, Komodia, as a passphrase. The Komodia library was designed to intercept client-side TLS/SSL traffic for parental control and surveillance, but it was also used in numerous adware programs, including Superfish, that were often installed surreptitiously, without the computer user's knowledge. In turn, these potentially unwanted programs installed the corrupt root certificate, allowing attackers to completely control web traffic and confirm false websites as authentic. In May 2016, it was reported that dozens of Danish HTTPS-protected websites belonging to Visa Inc. were vulnerable to attacks allowing hackers to inject malicious code and forged content into the browsers of visitors. The attacks worked because the TLS implementation used on the affected servers incorrectly reused random numbers (nonces) that are intended to be used only once, ensuring that each TLS handshake is unique. In February 2017, an implementation error caused by a single mistyped character in code used to parse HTML created a buffer overflow error on Cloudflare servers. Similar in its effects to the Heartbleed bug discovered in 2014, this overflow error, widely known as Cloudbleed, allowed unauthorized third parties to read data in the memory of programs running on the servers—data that should otherwise have been protected by TLS. As of July 2021, the Trustworthy Internet Movement estimated the ratio of websites that are vulnerable to TLS attacks.
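The birthday bound that Sweet32 (above) exploits is easy to compute. The sketch below applies the standard birthday approximation to a 64-bit block cipher, showing that a ciphertext-block collision becomes likely after roughly 2³² blocks of captured traffic:

```python
import math

# With a 64-bit block cipher in CBC mode, ciphertext blocks behave like
# random 64-bit values, so a collision is expected near sqrt(2**64) = 2**32
# blocks (about 32 GiB of traffic at 8 bytes per block).
block_bits = 64
N = 2 ** block_bits
n_blocks = 2 ** 32

# Birthday approximation: P(collision) ≈ 1 - exp(-n² / 2N).
p = 1 - math.exp(-(n_blocks ** 2) / (2 * N))
print(f"collision probability after 2^32 blocks: {p:.2f}")   # ≈ 0.39
```

Each such collision leaks the XOR of two plaintext blocks, which is why 64-bit ciphers like 3DES are dangerous for long-lived, high-volume TLS connections, while 128-bit ciphers like AES push the same bound out to an impractical 2⁶⁴ blocks.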
Forward secrecy is a property of cryptographic systems which ensures that a session key derived from a set of public and private keys will not be compromised if one of the private keys is compromised in the future. Without forward secrecy, if the server's private key is compromised, not only will all future TLS-encrypted sessions using that server certificate be compromised, but also any past sessions that used it (provided those past sessions were intercepted and stored at the time of transmission). An implementation of TLS can provide forward secrecy by requiring the use of ephemeral Diffie–Hellman key exchange to establish session keys, and some notable TLS implementations do so exclusively: e.g., Gmail and other Google HTTPS services that use OpenSSL. However, many clients and servers supporting TLS (including browsers and web servers) are not configured to implement such restrictions. In practice, unless a web service uses Diffie–Hellman key exchange to implement forward secrecy, all of the encrypted web traffic to and from that service can be decrypted by a third party if it obtains the server's master (private) key; e.g., by means of a court order. Even where Diffie–Hellman key exchange is implemented, server-side session management mechanisms can impact forward secrecy. The use of TLS session tickets (a TLS extension) causes the session to be protected by AES128-CBC-SHA256 regardless of any other negotiated TLS parameters, including forward secrecy ciphersuites, and the long-lived TLS session ticket keys defeat the attempt to implement forward secrecy. Stanford University research in 2014 also found that of 473,802 TLS servers surveyed, 82.9% of the servers deploying ephemeral Diffie–Hellman (DHE) key exchange to support forward secrecy were using weak Diffie–Hellman parameters. These weak parameter choices could potentially compromise the effectiveness of the forward secrecy that the servers sought to provide. Since late 2011, Google has provided forward secrecy with TLS by default to users of its Gmail service, along with Google Docs and encrypted search, among other services. Since November 2013, Twitter has provided forward secrecy with TLS to users of its service. As of August 2019, about 80% of TLS-enabled websites are configured to use cipher suites that provide forward secrecy to most web browsers. TLS interception (or HTTPS interception if applied particularly to that protocol) is the practice of intercepting an encrypted data stream in order to decrypt it, read and possibly manipulate it, and then re-encrypt it and send the data on its way again. This is done by way of a "transparent proxy": the interception software terminates the incoming TLS connection, inspects the HTTP plaintext, and then creates a new TLS connection to the destination. TLS/HTTPS interception is used as an information security measure by network operators in order to be able to scan for and protect against the intrusion of malicious content into the network, such as computer viruses and other malware. Such content could otherwise not be detected as long as it is protected by encryption, which is increasingly the case as a result of the routine use of HTTPS and other secure protocols. A significant drawback of TLS/HTTPS interception is that it introduces new security risks of its own.
One notable limitation is that it provides a point where network traffic is available unencrypted, giving attackers an incentive to attack this point in particular in order to gain access to otherwise secure content. The interception also allows the network operator, or persons who gain access to its interception system, to perform man-in-the-middle attacks against network users. A 2017 study found that "HTTPS interception has become startlingly widespread, and that interception products as a class have a dramatically negative impact on connection security". Protocol details The TLS protocol exchanges records, which encapsulate the data to be exchanged in a specific format (see below). Each record can be compressed, padded, appended with a message authentication code (MAC), or encrypted, all depending on the state of the connection. Each record has a content type field that designates the type of data encapsulated, a length field and a TLS version field. The data encapsulated may be control or procedural messages of TLS itself, or simply the application data needed to be transferred by TLS. The specifications (cipher suite, keys, etc.) required to exchange application data by TLS are agreed upon in the "TLS handshake" between the client requesting the data and the server responding to requests. The protocol therefore defines both the structure of payloads transferred in TLS and the procedure to establish and monitor the transfer. When the connection starts, the record encapsulates a "control" protocol – the handshake messaging protocol (content type 22). This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of messages and the order of their exchange. These may vary according to the demands of the client and server – i.e., there are several possible procedures to set up the connection. This initial exchange results in a successful TLS connection (both parties ready to transfer application data with TLS) or an alert message (as specified below). In a typical connection, the server (but not the client) is authenticated by its certificate; a full handshake can additionally authenticate the client (see mutual authentication) via TLS, using certificates exchanged between both peers. Public key operations (e.g., RSA) are relatively expensive in terms of computational power. TLS provides a secure shortcut in the handshake mechanism to avoid these operations: resumed sessions. Resumed sessions are implemented using session IDs or session tickets. Apart from the performance benefit, resumed sessions can also be used for single sign-on, as it is guaranteed that both the original session and any resumed session originate from the same client. This is of particular importance for the FTP over TLS/SSL protocol, which would otherwise suffer from a man-in-the-middle attack in which an attacker could intercept the contents of the secondary data connections. The TLS 1.3 handshake was condensed to only one round trip compared to the two round trips required in previous versions of TLS/SSL. To start the handshake, the client guesses which key exchange algorithm will be selected by the server and sends a ClientHello message to the server containing a list of supported ciphers (in order of the client's preference) and public keys for some or all of its key exchange guesses.
If the client successfully guesses the key exchange algorithm, one round trip is eliminated from the handshake. After receiving the ClientHello, the server selects a cipher and sends back a ServerHello with its own public key, followed by server Certificate and Finished messages. After the client receives the server's Finished message, it is coordinated with the server on which cipher suite to use. In an ordinary full handshake, the server sends a session id as part of the ServerHello message. The client associates this session id with the server's IP address and TCP port, so that when the client connects again to that server, it can use the session id to shortcut the handshake. In the server, the session id maps to the cryptographic parameters previously negotiated, specifically the "master secret". Both sides must have the same "master secret", or the resumed handshake will fail (this prevents an eavesdropper from using a session id). The random data in the ClientHello and ServerHello messages virtually guarantees that the generated connection keys will be different from those in the previous connection. In the RFCs, this type of handshake is called an abbreviated handshake. It is also described in the literature as a restart handshake. Instead of session IDs, TLS can also be extended via session tickets, which define a way to resume a TLS session without requiring that session-specific state be stored at the TLS server. When using session tickets, the TLS server stores its session-specific state in a session ticket and sends the session ticket to the TLS client for storing. The client resumes a TLS session by sending the session ticket to the server, and the server resumes the TLS session according to the session-specific state in the ticket. The session ticket is encrypted and authenticated by the server, and the server verifies its validity before using its contents. One particular weakness of this method with OpenSSL is that it always limits encryption and authentication security of the transmitted TLS session ticket to AES128-CBC-SHA256, no matter what other TLS parameters were negotiated for the actual TLS session. This means that the state information (the TLS session ticket) is not as well protected as the TLS session itself. Of particular concern is OpenSSL's storage of the keys in an application-wide context (SSL_CTX), i.e. for the life of the application, without allowing for re-keying of the AES128-CBC-SHA256 TLS session tickets without resetting the application-wide OpenSSL context (which is uncommon, error-prone and often requires manual administrative intervention). All TLS records share a general format: a content type field, a version field, a length field, and the encapsulated payload (the header layout is sketched below). No MAC or padding fields can be present at the end of TLS records before all cipher algorithms and parameters have been negotiated and handshaked, and then confirmed by sending a ChangeCipherSpec record (see below) signalling that these parameters will take effect in all further records sent by the same peer. Most messages exchanged during the setup of the TLS session are based on this record, unless an error or warning occurs and needs to be signaled by an Alert protocol record (see below), or the encryption mode of the session is modified by another record (see ChangeCipherSpec protocol below). Note that multiple handshake messages may be combined within one record. The Alert record should normally not be sent during normal handshaking or application exchanges. However, this message can be sent at any time during the handshake and up to the closure of the session.
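The record framing referred to above can be parsed with a few lines of code. In this minimal sketch the header is one content-type byte (20 ChangeCipherSpec, 21 Alert, 22 Handshake, 23 Application Data), two version bytes, and a two-byte length; the record bytes below are illustrative, not a real capture:

```python
import struct

# An illustrative handshake record: type 22, version 3.3 (TLS 1.2 on the
# wire), length 5, followed by a 5-byte payload.
record = bytes([0x16, 0x03, 0x03, 0x00, 0x05]) + b"\x01\x02\x03\x04\x05"

content_type, ver_major, ver_minor, length = struct.unpack("!BBBH", record[:5])
print(content_type)            # 22 = handshake protocol
print((ver_major, ver_minor))  # (3, 3) = TLS 1.2 on the wire
print(record[5:5 + length])    # the encapsulated payload
```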
If the alert is used to signal a fatal error, the session will be closed immediately after sending the record, so the record is used to give a reason for the closure. If the alert level is flagged as a warning, the remote peer can decide to close the session if it decides that the session is not reliable enough for its needs (before doing so, the remote peer may also send its own signal). Support for name-based virtual servers From the application protocol point of view, TLS belongs to a lower layer, although the TCP/IP model is too coarse to show it. This means that the TLS handshake is usually (except in the STARTTLS case) performed before the application protocol can start. With the name-based virtual server feature provided by the application layer, all co-hosted virtual servers share the same certificate, because the server has to select and send a certificate immediately after the ClientHello message. This is a significant problem in hosting environments because it means either sharing the same certificate among all customers or using a different IP address for each of them. There are two known workarounds provided by X.509: a certificate can list several hostnames in its subjectAltName field, or a wildcard certificate can match all the hostnames within a single domain. To provide the server name, Transport Layer Security (TLS) Extensions allow clients to include a Server Name Indication (SNI) extension in the extended ClientHello message.: §3 This extension hints to the server immediately which name the client wishes to connect to, so the server can select the appropriate certificate to send to the client. There is also a method to implement name-based virtual hosting by upgrading HTTP to TLS via an HTTP/1.1 Upgrade header. Normally this is to securely implement HTTP over TLS within the main "http" URI scheme instead of the commonly used "https" scheme. This would avoid forking the URI space and reduce the number of used ports; however, few implementations currently support this.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XMPP] | [TOKENS: 3728] |
XMPP Extensible Messaging and Presence Protocol (abbreviated XMPP, originally named Jabber) is an open communication protocol designed for instant messaging (IM), presence information, and contact list maintenance. Based on XML (Extensible Markup Language), it enables the near-real-time exchange of structured data between two or more network entities. Designed to be extensible, the protocol offers a multitude of applications beyond traditional IM in the broader realm of message-oriented middleware, including signalling for VoIP, video, file transfer, gaming and other uses. Unlike most commercial instant messaging protocols, XMPP is defined in an open standard in the application layer. The architecture of the XMPP network is similar to email; anyone can run their own XMPP server and there is no central master server. This federated open system approach allows users to interoperate with others on any server using a Jabber identifier (JID) user account, similar to an email address. XMPP implementations can be developed using any software license, and many server, client, and library implementations are distributed as free and open-source software. Numerous freeware and commercial software implementations also exist. Originally developed by the open-source community, the protocols were formalized as an approved instant messaging standard in 2004 and have been continuously developed with new extensions and features. Various XMPP client software is available on both desktop and mobile platforms and devices – by 2003 the protocol was used by over ten million people worldwide on the network, according to the XMPP Standards Foundation. Federated Instant Messaging Although the protocol has other uses, the primary application is federated instant messaging, delivering a standard instant messaging and presence protocol, outlined below. A client Alice ("alice@example.com") has a message for some other user, Beth ("beth@example.com"), and uses XMPP to convey this to the example.com server. If Beth is online, the server delivers the message instantly; otherwise it will be held for delivery later. If Beth is offline, this status is visible to Alice. If the message is for a user on another server, Charles ("charles@example.net"), then the example.com server connects using XMPP to pass the message to the example.net server. The message is then similarly delivered or held, and Alice is informed of the status. Following the initial message delivery, the end clients are in a "chat", and each party is subsequently informed of changes to the other's status. The XMPP client communicates with the server over a TLS-encrypted TCP stream on port 5222. XMPP servers communicate with each other over a TLS-encrypted TCP stream on port 5269. Protocol characteristics The XMPP network architecture is reminiscent of the Simple Mail Transfer Protocol (SMTP), a client–server model; clients do not talk directly to one another, as the system is decentralized – anyone can run a server. By design, there is no central authoritative server as there is with messaging services such as AIM, WLM, WhatsApp or Telegram. Some confusion often arises on this point, as there is a public XMPP server being run at jabber.org, to which many users subscribe. However, anyone may run their own XMPP server on their own domain. Every user on the network has a unique XMPP address, called a Jabber ID.
The JID is structured like an email address with a username and a domain name (or IP address) for the server where that user resides, separated by an at sign (@) – for example, "alice@example.com": here alice is the username and example.com the server with which the user is registered. Since a user may wish to log in from multiple locations, they may specify a resource. A resource identifies a particular client belonging to the user (for example home, work, or mobile). This may be included in the JID by appending a slash followed by the name of the resource. For example, the full JID of a user's mobile account could be username@example.com/mobile. Each resource may specify a numerical value called priority. Messages simply sent to username@example.com will go to the client with the highest priority (the one with the largest numerical value), but those sent to username@example.com/mobile will go only to the mobile client (a parsing sketch of this structure follows below). JIDs without a username part are also valid, and may be used for system messages and control of special features on the server. A resource remains optional for these JIDs as well. Routing messages based on a logical endpoint identifier – the JID – instead of an explicit IP address creates opportunities to use XMPP as an overlay network implementation on top of different underlying networks. The original and "native" transport protocol for XMPP is Transmission Control Protocol (TCP), using open-ended XML streams over long-lived TCP connections. As an alternative to the TCP transport, the XMPP community has also developed an HTTP transport for web clients as well as users behind restricted firewalls. In the original specification, XMPP could use HTTP in two ways: polling and binding. The polling method, now deprecated, essentially implies messages stored on a server-side database are being fetched (and posted) regularly by an XMPP client by way of HTTP 'GET' and 'POST' requests. The binding method, implemented using Bidirectional-streams Over Synchronous HTTP (BOSH), allows servers to push messages to clients as soon as they are sent. This push model of notification is more efficient than polling, where many of the polls return no new data. Because the client uses HTTP, most firewalls allow clients to fetch and post messages without any hindrance. Thus, in scenarios where the TCP port used by XMPP is blocked, a server can listen on the normal HTTP port and the traffic should pass without problems. Various websites let people sign into XMPP via a browser. Furthermore, there are open public servers that listen on the standard http (port 80) and https (port 443) ports, and hence allow connections from behind most firewalls. However, the IANA-registered port for BOSH is actually 5280, not 80. The XMPP Standards Foundation or XSF (formerly the Jabber Software Foundation) is active in developing open XMPP extensions, so-called XEPs. However, extensions can also be defined by any individual, software project, or organization. To maintain interoperability, common extensions are managed by the XSF. XMPP applications beyond IM include: chat rooms, network management, content syndication, collaboration tools, file sharing, gaming, remote systems control and monitoring, geolocation, middleware and cloud computing, VoIP, and identity services.
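Returning to the JID structure described above, the following minimal Python sketch (an illustrative addition, not part of the original article) splits a JID into its localpart, domain, and optional resource; it deliberately ignores the normalization and validation rules of RFC 7622:

    def parse_jid(jid: str):
        """Split a JID into (localpart, domain, resource); the localpart and
        resource are optional and returned as None when absent."""
        bare, slash, resource = jid.partition("/")
        local, at, domain = bare.rpartition("@")
        return (local if at else None, domain, resource if slash else None)

    print(parse_jid("alice@example.com"))         # ('alice', 'example.com', None)
    print(parse_jid("alice@example.com/mobile"))  # ('alice', 'example.com', 'mobile')
    print(parse_jid("example.com"))               # (None, 'example.com', None) – a server JID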
Building on its capability to support discovery across local network domains, XMPP is well-suited for cloud computing where virtual machines, networks, and firewalls would otherwise present obstacles to alternative service discovery and presence-based solutions. Cloud computing and storage systems rely on various forms of communication over multiple levels, including not only messaging between systems to relay state but also the migration or distribution of larger objects, such as storage or virtual machines. Along with authentication and in-transit data protection, XMPP can be applied at a variety of levels and may prove ideal as an extensible middleware or message-oriented middleware (MOM) protocol. Since XML is text-based, normal XMPP has a higher network overhead compared to purely binary solutions. This issue was being addressed by the experimental XEP-0322 Efficient XML Interchange (EXI) Format, where XML is serialized in an efficient binary manner, especially in schema-informed mode. This XEP is currently deferred. In-band binary data transfer is limited. Binary data must first be base64-encoded before it can be transmitted in-band (see the sketch below). Therefore, any significant amount of binary data (e.g., file transfers) is best transmitted out-of-band, using in-band messages to coordinate. In most cases this is dealt with by using an attachment to a message and the widely implemented XEP-0363 HTTP File Upload mechanism. Voice and video chat can be provided via the Jingle XMPP Extension Protocol, XEP-0166. Features Using the extension called Jingle, XMPP can provide an open means to support machine-to-machine or peer-to-peer communications across a diverse set of networks. This feature is mainly used for IP telephony (VoIP). XMPP supports conferences with multiple users, using the specification Multi-User Chat (MUC) (XEP-0045). From the point of view of a normal user, it is comparable to Internet Relay Chat (IRC). XMPP servers can be isolated (e.g., on a company intranet), and secure authentication (SASL) and point-to-point encryption (TLS) have been built into the core XMPP specifications. Off-the-Record Messaging (OTR) is an extension of XMPP enabling encryption of messages and data. It has since been superseded by OMEMO (XEP-0384), a multi-end-to-multi-end extension providing end-to-end encryption between users. This gives a higher level of security, by encrypting all data at the source client and decrypting it again at the target client; the server operator cannot decrypt the data they are forwarding. Messages can also be encrypted with OpenPGP, for example with the software Gajim. While several service discovery protocols exist today (such as zeroconf or the Service Location Protocol), XMPP provides a solid base for the discovery of services residing locally or across a network, and the availability of these services (via presence information), as specified by XEP-0030 DISCO. One of the original design goals of the early Jabber open-source community was enabling users to connect to multiple instant messaging systems (especially non-XMPP systems) through a single client application. This was done through entities called transports or gateways to other instant messaging protocols like ICQ, AIM or Yahoo Messenger, but also to protocols such as SMS, IRC or email. Unlike multi-protocol clients, XMPP provides this access at the server level by communicating via special gateway services running alongside an XMPP server.
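To make the base64 in-band overhead mentioned above concrete, here is a short illustrative Python sketch (an addition, not from the article; the stanza and its namespace are hypothetical, not a real XEP); base64 inflates a binary payload by roughly a third before it can be embedded in an XML stanza:

    import base64

    payload = bytes(range(256)) * 4              # 1,024 bytes of sample binary data
    encoded = base64.b64encode(payload).decode("ascii")
    print(len(payload), len(encoded))            # 1024 1368 -> about 33% larger

    # Hypothetical stanza for illustration only; real in-band transfers use
    # extensions such as XEP-0047 (In-Band Bytestreams).
    stanza = "<data xmlns='urn:example:binary'>%s</data>" % encoded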
Any user can "register" with one of these gateways by providing the information needed to log on to that network, and can then communicate with users of that network as though they were XMPP users. Thus, such gateways function as client proxies (the gateway authenticates on the user's behalf on the non-XMPP service). As a result, any client that fully supports XMPP can access any network with a gateway without extra code in the client, and without the need for the client to have direct access to the Internet. However, the client proxy model may violate terms of service on the protocol used (although such terms of service are not legally enforceable in several countries) and also requires the user to send their IM username and password to the third-party site that operates the transport (which may raise privacy and security concerns). Another type of gateway is a server-to-server gateway, which enables a non-XMPP server deployment to connect to native XMPP servers using the built-in interdomain federation features of XMPP. Such server-to-server gateways are offered by several enterprise IM software products. Software XMPP is implemented by many clients, servers, and code libraries. These implementations are provided under a variety of software licenses. Numerous XMPP server implementations exist; some well-known ones include ejabberd and Prosody. A large number of XMPP clients exist for various modern and legacy platforms, including both graphical and command-line based clients. According to the XMPP website, some of the most popular software include Conversations, Cheogram, Monocles and Quicksy (Android), Dino (BSD, Windows, Unix, Linux), Converse.js (web browser, Linux, Windows, macOS), Gajim (Windows, Linux), Monal (macOS, iOS), and Swift.IM (macOS, Windows, Linux). Lately, Monal has been forked as a Quicksy release for iOS. Other clients include: Bombus, ChatSecure, Coccinella, Miranda NG, Pidgin, Psi, Tkabber, Trillian, and Xabber. Deployment and distribution There are thousands of XMPP servers worldwide, many public ones as well as private individuals or organizations running their own servers without commercial intent. Numerous websites show a list of public XMPP servers at which users may register (for example on the XMPP.net website). Several large public IM services natively use or used XMPP, including LiveJournal's "LJ Talk", Nimbuzz, and HipChat. Various hosting services, such as DreamHost, enable hosting customers to choose XMPP services alongside more traditional web and email services. Specialized XMPP hosting services also exist in the form of cloud services, so that domain owners need not run their own XMPP servers directly, including Cisco Webex Connect, Chrome.pl, Flosoft.biz, i-pobox.net, and hosted.im. The majority of these services are federated, so that users of one service can communicate with users of another service. XMPP is also used in deployments of non-IM services, including smart grid systems such as demand response applications, message-oriented middleware, and as a replacement for SMS to provide text messaging on many smartphone clients. Some of the largest messaging providers use, or have been using, various forms of XMPP-based protocols in their backend systems without necessarily exposing this fact to their end users. One example is Google, which in August 2005 introduced Google Talk, a combination VoIP and IM system that uses XMPP for instant messaging and as a base for a voice and file transfer signaling protocol called Jingle.
The initial launch did not include server-to-server communications; Google enabled that feature on January 17, 2006. Google later added video functionality to Google Talk, also using the Jingle protocol for signaling. In May 2013, Google announced XMPP compatibility would be dropped from Google Talk for server-to-server federation, although it would retain client-to-server support. Google Talk has since been dropped from Google's line of products. In January 2008, AOL introduced experimental XMPP support for its AOL Instant Messenger (AIM) service, allowing AIM users to communicate using XMPP. However, in March 2008, this service was discontinued.[citation needed] As of May 2011, AOL offers limited XMPP support. In February 2010, the social-networking site Facebook opened up its chat feature to third-party applications via XMPP. Some functionality was unavailable through XMPP, and support was dropped in April 2014. Similarly, in December 2011, Microsoft released an XMPP interface to its Microsoft Messenger service. Skype, its de facto successor, also provided limited XMPP support. Apache Wave is another example. XMPP is the de facto standard for private chat in gaming-related platforms such as Origin and PlayStation, as well as the now discontinued Xfire and Raptr. Two notable exceptions are Steam and Xbox LIVE; both use their own proprietary messaging protocols. History and development Jeremie Miller began working on the Jabber technology in 1998 and released the first version of the jabberd server on January 4, 1999. The early Jabber community focused on open-source software, mainly the jabberd server, but its major outcome proved to be the development of the XMPP protocol. The Internet Engineering Task Force (IETF) formed an XMPP working group in 2002 to formalize the core protocols as an IETF instant messaging and presence technology. The early Jabber protocol, as developed in 1999 and 2000, formed the basis for XMPP as published in RFC 3920 and RFC 3921 in October 2004 (the primary changes during formalization by the IETF's XMPP Working Group were the addition of TLS for channel encryption and SASL for authentication). The XMPP Working Group also produced specifications RFC 3922 and RFC 3923. In 2011, RFC 3920 and RFC 3921 were superseded by RFC 6120 and RFC 6121 respectively, with RFC 6122 specifying the XMPP address format. In 2015, RFC 6122 was superseded by RFC 7622. In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly the Jabber Software Foundation) is active in developing open XMPP extensions. The first IM service based on XMPP was Jabber.org, which has operated continuously and offered free accounts since 1999. From 1999 until February 2006, the service used jabberd as its server software, at which time it migrated to ejabberd (both of which are free software application servers). In January 2010, the service migrated to the proprietary M-Link server software produced by Isode Ltd. In September 2008, Cisco Systems acquired Jabber, Inc., the creators of the commercial product Jabber XCP. The XMPP Standards Foundation (XSF) develops and publishes extensions to XMPP through a standards process centered on XMPP Extension Protocols (XEPs, previously known as Jabber Enhancement Proposals or JEPs). XMPP features such as federation across domains, publish/subscribe, authentication, and security, even for mobile endpoints, are being used to implement the Internet of Things.
Several XMPP extensions are part of the experimental implementation: Efficient XML Interchange (EXI) Format; Sensor Data; Provisioning; Control; Concentrators; Discovery. These efforts are documented on a page in the XMPP wiki dedicated to the Internet of Things and on the XMPP IoT mailing list. Specifications and standards The IETF XMPP working group has produced a series of Request for Comments (RFC) documents, the most important and most widely implemented of which are the core specifications. XMPP has often been regarded as a competitor to SIMPLE, based on the Session Initiation Protocol (SIP), as the standard protocol for instant messaging and presence notification. The XMPP extension for multi-user chat can be seen as a competitor to IRC, although IRC is far simpler, has far fewer features, and is far more widely used.[citation needed] The XMPP extensions for publish–subscribe provide many of the same features as the Advanced Message Queuing Protocol (AMQP).
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Economy_(Israel)] | [TOKENS: 150] |
Contents Ministry of Economy (Israel) The Ministry of Economy (Hebrew: משרד הכלכלה, romanized: Misrad HaKalkala) is a ministry of the Israeli government that oversees commerce, industry and labor in Israel. History The ministry was established in 1948 as the Ministry of Commerce and Industry. In 1977 the Tourism Ministry post was added to it, becoming the Ministry of Industry, Trade, and Tourism. However, the merger was reversed in 1981 and the office was renamed Ministry of Industry and Trade. Labor, which had been merged with the Welfare Ministry in the 1970s, was appended to the portfolio in 2003.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/URL#cite_ref-FOOTNOTEBerners-Lee2015_14-1] | [TOKENS: 957] |
Contents URL A uniform resource locator (URL), colloquially known as a web address, is a reference to a resource on the World Wide Web. A URL specifies the location of a resource on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI), although many people use the two terms interchangeably.[a] A URL is most commonly used to reference a web page (HTTP/HTTPS) but is also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. As an example of a web page URL, https://www.example.com/index.html indicates protocol https, hostname www.example.com, and file name index.html. History The Uniform Resource Locator was defined in RFC 1738 in 1994 by Tim Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet Engineering Task Force (IETF), as an outcome of collaboration started at the IETF Living Documents birds of a feather session in 1992. The format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and filenames. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//). Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout, and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary. Early WorldWideWeb collaborators, including Berners-Lee, originally proposed the use of UDIs: Universal Document Identifiers. An early (1993) draft of the HTML Specification referred to "Universal" Resource Locators. This was dropped some time between June 1994 and October 1994. In his book Weaving the Web, Berners-Lee emphasizes his preference for the original inclusion of "universal" in the expansion rather than the word "uniform", to which it was later changed, and he gives a brief account of the contention that led to the change. Syntax Every HTTP URL conforms to the syntax of a generic URI. The URI generic syntax consists of five components organized hierarchically in order of decreasing significance from left to right: scheme, authority, path, query, and fragment.: §3 A component is undefined if it has an associated delimiter and the delimiter does not appear in the URI; the scheme and path components are always defined.: §5.2.1 A component is empty if it has no characters; the scheme component is always non-empty.: §3 The authority component itself consists of subcomponents: optional user information, a host (a registered name or an IP address), and an optional port, giving the generic form scheme://userinfo@host:port/path?query#fragment (see the sketch below). A web browser will usually dereference a URL by performing an HTTP request to the specified host, by default on port number 80. URLs using the https scheme require that requests and responses be made over a secure connection to the website.
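As an illustration of the five generic components just described (an addition, not part of the article; the URL is a made-up placeholder modelled on the RFC 3986 example), Python's urllib.parse exposes each of them:

    from urllib.parse import urlsplit

    # Placeholder URL exercising all five generic URI components.
    parts = urlsplit("https://alice@www.example.com:8042/over/there?name=ferret#nose")
    print(parts.scheme)    # 'https'
    print(parts.netloc)    # 'alice@www.example.com:8042' (the authority component)
    print(parts.username, parts.hostname, parts.port)  # 'alice' 'www.example.com' 8042
    print(parts.path)      # '/over/there'
    print(parts.query)     # 'name=ferret'
    print(parts.fragment)  # 'nose'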
Internationalized URL Internet users are distributed throughout the world using a wide variety of languages and alphabets, and expect to be able to create URLs in their own local alphabets. An Internationalized Resource Identifier (IRI) is a form of URL that includes Unicode characters. All modern browsers support IRIs. The parts of the URL requiring special treatment for different alphabets are the domain name and path. The domain name in the IRI is known as an Internationalized Domain Name (IDN). Web and Internet software automatically convert the domain name into punycode usable by the Domain Name System; for example, the Chinese URL http://例子.卷筒纸 becomes http://xn--fsqu00a.xn--3lr804guic/. The xn-- prefix indicates that the domain label was not originally ASCII. The URL path name can also be specified by the user in the local writing system. If not already encoded, it is converted to UTF-8, and any characters not part of the basic URL character set are escaped as hexadecimal using percent-encoding; for example, the Japanese URL http://example.com/引き割り.html becomes http://example.com/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html. The target computer decodes the address and displays the page. Protocol-relative URLs Protocol-relative links (PRL), also known as protocol-relative URLs (PRURL), are URLs that have no protocol specified. For example, //example.com will use the protocol of the current page, typically HTTP or HTTPS.
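Both conversions described above can be reproduced with Python's standard library (an illustrative addition, not from the article; note that the built-in idna codec implements the older IDNA 2003 rules, whereas modern registries use IDNA 2008 via the third-party idna package):

    from urllib.parse import quote, unquote

    # Punycode for the domain labels (matches the article's example).
    print("例子".encode("idna").decode("ascii"))   # 'xn--fsqu00a'

    # Percent-encoding for the path (matches the article's example).
    encoded = quote("引き割り.html")
    print(encoded)           # '%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html'
    print(unquote(encoded))  # '引き割り.html' – the target computer decodes it back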
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Personification] | [TOKENS: 3625] |
Contents Personification Personification is the representation of any thing, being, or abstraction as a person or with person-like qualities. In the arts and as a literary device, personification is common for: places, especially cities, countries, and continents; elements of the natural world, such as trees, the seasons, the traditional "four elements", the four cardinal winds, and the five senses; and abstract concepts, such as death, the four cardinal virtues, the seven deadly sins, and creative expression (for instance, as personified by the nine Muses). In most religions, deities have a strong element of personification, with human-like emotional reactions, desires, and intellectual capabilities. Often, deities themselves in polytheistic religions are embodied personifications of abstract concepts, such as in ancient Greek religion and the related ancient Roman religion (with gods that are personifications of the sea, the sun, victory, death, etc.), in particular among the many minor deities. Many such deities, such as the tyches or tutelary deities for major cities, survived the arrival of Christianity, now as symbolic personifications stripped of religious significance. An exception was the winged goddess of victory, Victoria/Nike, who developed into the visualisation of the Christian angel. Generally, personifications lack much in the way of narrative myths, although classical myth at least gave many of them parents among the major Olympian deities. The iconography of several personifications "maintained a remarkable degree of continuity from late antiquity until the 18th century". Female personifications tend to outnumber male ones, at least until modern national personifications, many of which are male. Personifications are very common elements in allegory, and historians and theorists of personification complain that the two have been too often confused, or discussion of them dominated by allegory. Single images of personifications tend to be titled as an "allegory", arguably incorrectly. By the late 20th century personification seemed largely out of fashion, but the semi-personificatory superhero figures of many comic book series came in the 21st century to dominate popular cinema in a number of superhero film franchises. According to Ernst Gombrich, "we tend to take it for granted rather than to ask questions about this extraordinary predominantly feminine population which greets us from the porches of cathedrals, crowds around our public monuments, marks our coins and our banknotes, and turns up in our cartoons and our posters; these females variously attired, of course, came to life on the medieval stage, they greeted the Prince on his entry into a city, they were invoked in innumerable speeches, they quarreled or embraced in endless epics where they struggled for the soul of the hero or set the action going, and when the medieval versifier went out on one fine spring morning and lay down on a grassy bank, one of these ladies rarely failed to appear to him in his sleep and to explain her own nature to him in any number of lines". History Personification as an artistic device is easier to discuss when belief in the personification as an actual spiritual being has died down; this seems to have happened in the ancient Graeco-Roman world, probably even before Christianisation. In other cultures, especially Hinduism and Buddhism, many personification figures still retain their religious significance, which is why they are not covered here. 
For example, Bharat Mata was devised as a Hindu goddess figure to act as a national personification by intellectuals in the Indian independence movement from the 1870s, but now has some actual Hindu temples. Personification is found very widely in classical literature, art and drama, as is the treatment of personifications as relatively minor deities, or the rather variable category of daemons. In classical Athens, every geographical division of the state for local government purposes had a personified deity which received some cultic attention, as well as Demos, a male personification for the governing assembly of free citizens, and Boule, a female one for the ruling council. These appear in art but are often hard to identify if not labelled. Personification in the Bible is mostly limited to passing phrases which can probably be regarded as literary flourishes, with the important and much-discussed exception of Wisdom in the Book of Proverbs, 1–9, where a female personification is treated at some length, and makes speeches. The Four Horsemen of the Apocalypse from the Book of Revelation can be regarded as personification figures, although the text does not specify what each of them personifies. According to James J. Paxson in his book on the subject "all personification figures prior to the sixth century A.D. were ... female"; but major rivers have male personifications much earlier, and are more often male, which often extends to "Water" in the Four Elements. The predominance of females is at least partly because Latin grammar gives nouns for abstractions the feminine gender. Pairs of winged victories decorated the spandrels of Roman triumphal arches and similar spaces, and ancient Roman coinage was an especially rich source of images, many carrying their name, which was helpful for medieval and Renaissance antiquarians. Sets of tyches representing the major cities of the empire were used in the decorative arts. Most imaginable virtues and virtually every Roman province were personified on coins at some point, a province often initially shown seated and dejected as "CAPTA" ("taken") after its conquest, and later standing, creating images such as Britannia that were often revived in the Renaissance or later. Lucian (2nd century AD) records a detailed description of a lost painting by Apelles (4th century BC) called the Calumny of Apelles, which some Renaissance painters followed, most famously Botticelli. This included eight personifications of virtues and vices: Hope, Repentance, Perfidy, Calumny, Fraud, Rancour, Ignorance, Suspicion, as well as two other figures. Platonism, which in some manifestations proposed systems involving numbers of spirits, was naturally conducive to personification and allegory, and is an influence on the uses of it from classical times through various revivals up to the Baroque period. According to Andrew Escobedo, "literary personification marshals inanimate things, such as passions, abstract ideas, and rivers, and makes them perform actions in the landscape of the narrative." He dates "the rise and fall of its [personification's] literary popularity" to "roughly, between the fifth and seventeenth centuries". Late antique philosophical books that made heavy use of personification and were especially influential in the Middle Ages included the Psychomachia of Prudentius (early 5th century), with an elaborate plot centered around battles between the virtues and vices, and The Consolation of Philosophy (c. 524) by Boethius, which takes the form of a dialogue between the author and "Lady Philosophy".
Fortuna and the Wheel of Fortune were prominent and memorable in this, which helped to make the latter a favourite medieval trope. Both authors were Christians, and the origins in the pagan classical religions of the standard range of personifications had been left well behind. A medieval creation was the Four Daughters of God, a shortened group of virtues consisting of Truth, Righteousness or Justice, Mercy, and Peace. There were also the seven virtues, made up of the four classical cardinal virtues of prudence, justice, temperance and courage (or fortitude), these going back to Plato's Republic, with the three theological virtues of faith, hope and charity. The seven deadly sins were their counterparts. The major works of Middle English literature had many personification characters, and often formed what are called "personification allegories" where the whole work is an allegory, largely driven by personifications. These include Piers Plowman by William Langland (c. 1370–90), where most of the characters are clear personifications named as their qualities, and several works by Geoffrey Chaucer, such as The House of Fame (1379–80). However, Chaucer tends to take his personifications in the direction of being more complex characters and to give them different names, as when he adapts part of the French Roman de la Rose (13th century). The English mystery plays and the later morality plays have many personifications as characters, alongside their biblical figures. Frau Minne, the spirit of courtly love in German medieval literature, had equivalents in other vernaculars. In Italian literature Petrarch's Triomphi, finished in 1374, is based around a procession of personifications carried on "cars", as was becoming fashionable in courtly festivities; it was illustrated by many different artists. Dante has several personification characters, but prefers using real persons to represent most sins and virtues. In Elizabethan literature many of the characters in Edmund Spenser's enormous epic The Faerie Queene, though given different names, are effectively personifications, especially of virtues. The Pilgrim's Progress (1678) by John Bunyan was the last great personification allegory in English literature, from a strongly Protestant position (though see Thomson's Liberty below). A work like Shelley's The Triumph of Life, unfinished at his death in 1822, which to many earlier writers would have called for personifications to be included, avoids them, as does most Romantic literature, apart from that of William Blake. Leading critics had begun to complain about personification in the 18th century, and such "complaints only grow louder in the nineteenth century". According to Andrew Escobedo, there is now "an unstated scholarly consensus" that "personification is a kind of frozen or hollow version of literal characters", which "depletes the fiction". Personifications, often in sets, frequently appear in medieval art, often illustrating or following literary works. The virtues and vices were probably the most common, and the virtues appear in many large sculptural programmes, for example the exteriors of Chartres Cathedral and Amiens Cathedral. In painting, both virtues and vices are personified along the lowest zone of the walls of the Scrovegni Chapel by Giotto (c. 1305), and are the main figures in Ambrogio Lorenzetti's Allegory of Good and Bad Government (1338–39) in the Palazzo Pubblico of Siena.
In the Allegory of Bad Government, Tyranny is enthroned, with Avarice, Pride, and Vainglory above him. Beside him on the magistrate's bench sit Cruelty, Deceit, Fraud, Fury, Division, and War, while Justice lies tightly bound below. The so-called Mantegna Tarocchi (c. 1465–75) are sets of fifty educational cards depicting personifications of social classes, the planets and heavenly bodies, and other subjects. A new pair, once common on the portals of large churches, is that of Ecclesia and Synagoga. Death envisaged as a skeleton, often with a scythe and hour-glass, is a late medieval innovation that became very common after the Black Death. However, it is rarely seen in funerary art "before the Counter-Reformation". When not illustrating literary texts, or following a classical model as Botticelli does, personifications in art tend to be relatively static, and found together in sets, whether of statues decorating buildings or paintings, prints or media such as porcelain figures. Sometimes one or more virtues take on and invariably conquer vices. Other paintings by Botticelli are exceptions to such simple compositions, in particular his Primavera and The Birth of Venus, in both of which several figures form complex allegories. An unusually powerful single personification figure is depicted in Melencolia I (1514), an engraving by Albrecht Dürer. Venus, Cupid, Folly and Time (c. 1545) by Agnolo Bronzino has five personifications, apart from Venus and Cupid. In all these cases, the meaning of the work remains uncertain, despite intensive academic discussion, and even the identity of the figures continues to be argued over. Theory Around 300 BC, Demetrius of Phalerum was the first writer on rhetoric to describe prosopopoeia, which was already a well-established device in rhetoric and literature, from Homer onwards. Quintilian's lengthy Institutio Oratoria gives a comprehensive account, and a taxonomy of common personifications; no more comprehensive account was written until after the Renaissance. The main Renaissance humanists to deal with the subject at length were Erasmus in his De copia and Petrus Mosellanus in Tabulae de schematibus et tropis, who were copied by other writers throughout the 16th century. From the late 16th century theoretical writers such as Karel van Mander in his Schilder-boeck (1604) began to treat personification in terms of the visual arts. At the same time the emblem book, describing and illustrating emblematic images that were largely personifications, became enormously popular, both with intellectuals and artists and craftsmen looking for motifs. The most famous of these was the Iconologia of Cesare Ripa, first published unillustrated in 1593, but from 1603 published in many different illustrated editions, using different artists. This established, at the least, the identifying attributes carried by many personifications until the 19th century. From the 20th century into the 21st, the past use of personification has received greatly increased critical attention, just as the artistic practice of it has greatly declined. Among a number of key works, The Allegory of Love: A Study in Medieval Tradition (1936), by C. S. Lewis was an exploration of courtly love in medieval and Renaissance literature. Innovation The classical repertoire of virtues, seasons, cities and so forth supplied the majority of subjects until the 19th century, but some new personifications became required.
The 16th century saw the new personification of the Americas and made the four continents an appealing new set, four figures being better suited to many contexts than three. The 18th-century discovery of Australia was not so quickly followed by an addition to the set, if only for reasons of geometry; Australia is not included in the continents at the corners of the Albert Memorial (1860s). The memorial does have a set of three-figure groups representing agriculture, commerce, engineering and manufacturing, typical of the requirements for large public schemes of the period. The French group France Crowning Art and Industry is another example. A rather late example is the Alexander Hamilton U.S. Custom House in New York City (1901–07), which has large groups for the four continents by the entrance, and 12 figures personifying seafaring nations from history high on the facade. The invention of movable type printing saw Dame Imprimerie ("Lady Printing Press") introduced to the pageants of Lyon, a major printing center, along with "Typosine", a new muse of printing. A large gilt-bronze statue by Evelyn Beatrice Longman, a sculptor who was something of a specialist in "allegorical" statues, was commissioned by AT&T for the top of their New York headquarters. Since 1916 it has been titled, at different times, the Genius of Telegraphy, the Genius of Electricity, and, since the 1930s, the Spirit of Communication. Shakespeare's spirit Ariel was adopted by the sculptor Eric Gill as a personification of broadcasting, and features in his sculptures on Broadcasting House in London (opened 1932). National personifications A number of national personifications stick to the old formulas, with a female in classical dress, carrying attributes suggesting power, wealth, or other virtues. Libertas, the Roman goddess of liberty, had been important under the Roman Republic, and was somewhat uncomfortably co-opted by the Roman Empire; liberty was not seen as an innate right, but as granted to some under Roman law. She had appeared on the coins of the assassins of Julius Caesar, defenders of the Roman Republic. The medieval republics, mostly in Italy, greatly valued their liberty, and often used the word, but produced very few direct personifications. With the rise of nationalism and new states, many nationalist personifications included a strong element of liberty, perhaps culminating in the Statue of Liberty (Liberty Enlightening the World). The long poem Liberty (1734) by the Scottish poet James Thomson is a lengthy monologue spoken by the "Goddess of Liberty", describing her travels through the ancient world, and then English and British history, before the resolution of the Glorious Revolution of 1688 confirms her position there. Thomson also wrote the lyrics for Rule Britannia, and the two personifications were often combined as a personified "British Liberty", to whom a large monument was erected in the 1750s by a Whig magnate on his estate at Gibside. But, sometimes alongside these formal figures, a new type of national personification has arisen, typified by John Bull (1712) and Uncle Sam (c. 1812). Both began as figures in more or less satirical literature but achieved their prominence when taken into political cartoons and other visual media. The post-revolutionary Marianne in France, official since 1792, is something of a mixture of styles, sometimes formal and classical, at others a woman of the streets of Paris personified.
The Dutch Maiden is one of the earliest of these figures, and was mainly visual from the start, her efforts to repulse unwelcome Spanish advances shown in 16th-century popular prints.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Yitzhak_Wasserlauf] | [TOKENS: 587] |
Contents Yitzhak Wasserlauf Yitzhak Shimon Wasserlauf (Hebrew: יצחק שמעון וסרלאוף; born 14 August 1992) is an Israeli politician who has served as the Minister for the Development of the Periphery, the Negev, and the Galilee since 2025, and currently serves as a member of the Knesset for Otzma Yehudit, following the 2022 Israeli legislative election. Biography Wasserlauf was born in the Old City of Jerusalem to a religious Zionist family. He became the deputy chairman of the youth wing of the National Union party at age 17. He participated in the hesder program during his enlistment in the Israel Defense Forces' Golani Brigade, allowing him to divide his time between yeshiva and army service. Between 2014 and 2019, he taught at the Hesder Yeshiva Oz Ve'emuna in Tel Aviv, which he founded alongside Rabbi Ahiad Ettinger. Ettinger was killed in a shooting near Ariel in 2019. Political career Prior to the elections for the 21st Knesset, Wasserlauf joined Otzma Yehudit. For the September 2019 Knesset elections he was placed fourth on the Otzma Yehudit list, but was not elected as the party fell below the electoral threshold. Placed second on the list for the 2020 Israeli legislative election, he again missed out as the party failed to pass the threshold. In the elections for the 25th Knesset, he was placed fifth on the list, which won 14 mandates. Wasserlauf was elected to the Knesset and sworn in on 15 November 2022, becoming the youngest member of the 25th Knesset. He was appointed Minister for the Development of the Periphery, the Negev, and the Galilee. On 17 January 2025 Otzma Yehudit held a press conference in which the party announced its intention to withdraw from the coalition if the government accepted the three-phase ceasefire proposal. The proposal was accepted, and Wasserlauf resigned alongside the rest of the party's ministers when the ceasefire went into effect on 19 January 2025; his term ended on 21 January. Otzma Yehudit stated its intention to rejoin the governing coalition if the deal did not result in a permanent ceasefire. The cabinet approved Wasserlauf's re-appointment on 18 March, and the Knesset approved it the following day. Personal life Wasserlauf lives in a neighborhood in northern Tel Aviv. He previously lived in the neighborhood of Shapira. He is married and has three children.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-255] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million copies. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on the work begun with Nintendo and Sega to develop a console of its own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, there remained opposition from a majority of those present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name, in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995) to the console. Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One American retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who simply said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.1 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console countrywide, as the PS One model, on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark meant the console could not be officially released, so the officially distributed Sega Saturn initially dominated the market; as the Saturn was withdrawn, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for letters, stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving the milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering about 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and supports music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 textured polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two (dual) vibration motors (shock). Unlike its predecessor, the DualShock features textured rubber grips on its analogue sticks, longer handles, slightly different shoulder buttons, and rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models can play CD audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between firmware versions: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore duplicated the discs without it: the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to support the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities and Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market of the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-largest library of games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division contributing 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console on their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still earning the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Crux] | [TOKENS: 4337] |
Contents Crux Crux (/krʌks/ KRUKS) is a constellation of the southern sky that is centred on four bright stars in a cross-shaped asterism commonly known as the Southern Cross. It lies on the southern end of the Milky Way's visible band. The name Crux is Latin for cross. Though it is the smallest of all 88 modern constellations, Crux is among the most easily distinguished, as each of its four main stars has an apparent visual magnitude brighter than +2.8. It has attained a high level of cultural significance in many Southern Hemisphere states and nations. Blue-white α Crucis (Acrux) is the most southerly member of the constellation and, at magnitude 0.8, the brightest. The three other stars of the cross appear clockwise and in order of lessening magnitude: β Crucis (Mimosa), γ Crucis (Gacrux), and δ Crucis (Imai). ε Crucis (Ginan) also lies within the cross asterism. Many of these brighter stars are members of the Scorpius–Centaurus association, a large but loose group of hot, blue-white stars that appear to share common origins and motion across the southern Milky Way. Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its eastern border. Nearby to the southeast is a large dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca. History The bright stars in Crux were known to the Ancient Greeks, and Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 AD, the stars in the constellation now called Crux no longer rose above the horizon throughout most of Europe. Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his Divine Comedy. His description, however, may be allegorical, and the similarity to the constellation a coincidence. Venetian navigator Alvise Cadamosto in the 15th century made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the carro dell'ostro ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras[a] with being the first European to depict it correctly. Faras sketched and described the constellation (calling it "las guardas", "the guards") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch. Explorer Amerigo Vespucci seems to have observed not only the Southern Cross, but also the neighbouring Coalsack Nebula on his second voyage in 1501–1502. Another early modern account clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515 to 1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole, including the Southern Cross and the two Magellanic Clouds seen in an external orientation, as on a globe.
Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux. Characteristics Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north, and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 11h 56.13m and 12h 57.45m, while the declination coordinates are between −55.68° and −64.70°. The entire constellation is visible, for at least part of the year, from latitudes south of the 25th parallel north.[b] In tropical regions, Crux can be seen in the sky from April to June. Crux is exactly opposite Cassiopeia on the celestial sphere, so the two cannot appear in the sky at the same time. In this era, south of Cape Town, Adelaide, and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky. Crux is sometimes confused with the nearby False Cross asterism by stargazers. The False Cross consists of stars in Carina and Vela, is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross. Visibility Crux is easily visible from the Southern Hemisphere south of the 35th parallel, where it is circumpolar and visible at practically any time of year. It is also visible near the horizon from tropical latitudes of the Northern Hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancún or any other place at latitude 25° N or less at around 10 pm at the end of April. Due to precession, Crux will move closer to the South Pole in the next few millennia, up to 67° south declination for the middle of the constellation. However, by the year 14,000, Crux will be visible from most parts of Europe and the continental United States. Its visibility will extend to Northern Europe by 18,000, when it will be less than 30° south declination. In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ to α Crucis (the foot of the crucifix) about four and a half times farther gives a point close to the south celestial pole. Coincidentally, this point is also near where that line intersects a perpendicular taken southwards from the east–west axis joining Alpha Centauri and Beta Centauri, stars of similar declination to Crux, separated by roughly the width of the cross, but brighter.
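The pointer method lends itself to a small worked example: convert the two stars' equatorial coordinates to unit vectors, then continue the great circle from Gacrux through Acrux by four and a half times their angular separation. The following Python sketch is illustrative only; the rounded J2000 coordinates are assumed values, and NumPy is used for convenience.

import numpy as np

def radec_to_unit(ra_deg, dec_deg):
    # Convert equatorial coordinates (degrees) to a Cartesian unit vector.
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

gacrux = radec_to_unit(187.79, -57.11)  # gamma Crucis, rounded J2000 position
acrux = radec_to_unit(186.65, -63.10)   # alpha Crucis, rounded J2000 position

# Spherical linear interpolation: t = 0 is Gacrux, t = 1 is Acrux,
# and t = 5.5 continues the great circle 4.5 separations beyond Acrux.
theta = np.arccos(np.clip(gacrux @ acrux, -1.0, 1.0))
t = 5.5
pole_estimate = (np.sin((1 - t) * theta) * gacrux
                 + np.sin(t * theta) * acrux) / np.sin(theta)

dec = np.degrees(np.arcsin(pole_estimate[2] / np.linalg.norm(pole_estimate)))
print(f"Estimated declination: {dec:.1f} degrees")  # about -87, near the pole at -90

The residual error of a few degrees reflects the fact that the rule of thumb is a naked-eye approximation rather than an exact construction.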
Argentine gauchos are documented as using Crux for night orientation in the Pampas and Patagonia. Alpha and Beta Centauri are of similar declinations (and thus distance from the pole) and are often referred to as the "Southern Pointers" or simply "the Pointers", as they allow people to easily identify the Southern Cross, the constellation of Crux. Very few bright stars lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately south of Crux. Ninety-two stars brighter than apparent magnitude +2.5 shine in Earth's sky, and three of them lie in Crux, making it the constellation most densely populated with such stars: those three amount to 3.26% of the 92, about 19.2 times the 0.17% expected if the bright stars were distributed homogeneously across the sky and the 88 constellations drawn at random, given that Crux covers just 0.17% of the sky. Features Lacaille gave 13 stars Bayer designations Alpha through Lambda in 1756 and labelled two stars as Alpha and Theta. In 1879, Benjamin Gould added Mu Crucis (Mu1 and Mu2), as he felt the stars were bright enough to warrant names. Within the constellation's borders, 49 stars are brighter than or equal to apparent magnitude 6.5.[c] The four main stars that form the asterism are Alpha, Beta, Gamma, and Delta Crucis, and a fifth star is often included with the Southern Cross. Several other naked-eye stars lie within the borders of Crux. Unusually, 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda, and both components of the visual double star Mu. Crux contains many variable stars, boasting four Cepheid variables that may all reach naked-eye visibility, along with several other well-studied variable stars. The star HD 106906 has been found to have a planet—HD 106906 b—that has one of the widest orbits of any currently known planetary-mass companions. Crux is backlit by the multitude of stars of the Scutum-Crux Arm (more commonly called the Scutum-Centaurus Arm) of the Milky Way, the main inner arm in the local radial quarter of the galaxy, partly obscured by the dark Coalsack Nebula. Cultural significance The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the Southern Hemisphere, particularly of Australia, Brazil, Chile, and New Zealand. Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the national flags of Australia, Brazil, New Zealand, Papua New Guinea, and Samoa. A defaced version appears in the canton on the flag of Niue. They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory, and the Northern Territory, as well as the flag of the Magallanes Region of Chile, the flag of Londrina (Brazil) and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz).
The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and, as of July 2015, on the cover of Brazilian passports. Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the insignia of the Order of the Southern Cross, and the cross has featured as the name of the Brazilian currency (the cruzeiro from 1942 to 1986 and again from 1990 to 1994). All coins of the current series of the Brazilian real, introduced in 1998, display the constellation. Songs and literature reference the Southern Cross, including the Argentine epic poem Martín Fierro. The Argentine singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren". The cross gets a mention in the lyrics of the Brazilian national anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines"). The Southern Cross is mentioned in the second verse of the Australian national anthem: "Beneath our radiant Southern Cross we'll toil with hearts and hands". The Southern Cross features in the coat of arms of William Birdwood, 1st Baron Birdwood, the British officer who commanded the Australian and New Zealand Army Corps during the Gallipoli Campaign of the First World War. The Southern Cross is also mentioned in the Samoan national anthem: "Vaai 'i na fetu o lo'u a agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa." ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa.") The 1952–53 NBC television series Victory at Sea contained a musical number entitled "Beneath the Southern Cross". "Southern Cross" is a single released by Crosby, Stills and Nash in 1981; it reached number 18 on the Billboard Hot 100 in late 1982. "The Sign of the Southern Cross" is a song released by Black Sabbath in 1981 on the album Mob Rules. The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation". In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: "Thy Southern Cross the night". A stylised version of Crux appears on the Australian Eureka Flag. The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organised in the Southern Hemisphere, on the island of New Caledonia, and on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain. The Petersflagge flag of the German East Africa Company of 1885–1920, which included a constellation of five white, five-pointed Crux "stars" on a red ground, later served as the model for symbolism associated with generic German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and the Friends of the former German Protectorates (1956/1983 to the present). Southern Cross station is a major rail terminal in Melbourne, Australia. The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church, primarily within the territory of the Australian Catholic Bishops Conference, for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia. The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia. In India, a story relates the creation of Trishanku Swarga (त्रिशंकु), identified with Crux, by the sage Vishwamitra.
In Chinese, 十字架 (Shí Zì Jià), meaning "Cross", refers to an asterism consisting of γ Crucis, α Crucis, β Crucis, and δ Crucis. In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the Emu in the Sky (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the left hand of Tagai, and the stars of Musca as the trident of the fishing spear he is holding. In Aranda traditions of central Australia, the four Cross stars are the talon of an eagle and Gamma Centauri its leg. Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In Indonesia and Malaysia, it is known as Bintang Pari and Buruj Pari, respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as sao Cá Liệt (the ponyfish star). Among Filipino people, the Southern Cross has various names pertaining to tops, including kasing (Visayan languages), paglong (Bikol), and pasil (Tagalog). It is also called butiti (pufferfish) in Waray. The Javanese people of Indonesia called this constellation Gubug pèncèng ("raking hut") or lumbung ("the granary"), because the shape of the constellation was like that of a raking hut. The Southern Cross (α, β, γ, and δ Crucis), together with μ Crucis, is one of the asterisms used by Bugis sailors for navigation, called bintoéng bola képpang, meaning "incomplete house star". The Māori name for the Southern Cross is Māhutonga, and it is thought of as the anchor (Te Punga) of Tama-rereti's waka (the Milky Way), while the Pointers are its rope. In Tonga it is known as Toloa ("duck"); it is depicted as a duck flying south, with one of his wings (δ Crucis) wounded because Ongo tangata ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as Humu (the "triggerfish"), because of its shape. In Samoa the constellation is called Sumu ("triggerfish") because of its rhomboid shape, while α and β Centauri are called Luatagata (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross, including a knee protector and a net used to catch palolo worms. Neighbouring peoples in the Marshall Islands saw these stars as a fish. Peninsular Malays also see the likeness of a fish in Crux, particularly the Scomberomorus, known locally as Tohok. In Mapudungun, the language of Patagonian Mapuches, the name of the Southern Cross is Melipal, which means "four stars". In Quechua, the language of the Inca civilisation, Crux is known as "Chakana", which means literally "stair" (chaka, bridge, link; hanan, high, above), but carries a deep symbolism within Quechua mysticism. Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation encompassing Centaurus and Circinus along with the two bright stars. The Great Rhea was a constellation of the Bororo of Brazil. The Mocoví people of Argentina also saw a rhea including the stars of Crux.
Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta and the optical double star Delta1,2 Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as Aganagi, angry bees that had emerged from the Coalsack, which they saw as the beehive. Among Tuaregs, the four most visible stars of Crux are considered iggaren, i.e. four Maerua crassifolia trees. The Tswana people of Botswana saw the constellation as Dithutlwa, two giraffes: Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-55] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing; a small example follows the history below. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL), a title bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. Python 2.0 was released on 16 October 2000, introducing new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e. "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008; it was a major revision, not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
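The optional static typing mentioned above can be illustrated with a minimal sketch; the annotations below are ignored by the interpreter at run time but can be checked by external tools such as the third-party checker mypy (the built-in generic syntax list[float] requires Python 3.9 or later).

def average(values: list[float]) -> float:
    # Annotations document intent; CPython does not enforce them at run time.
    return sum(values) / len(values)

ratio: float = average([1.0, 2.5, 4.0])
print(ratio)  # 2.5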
As of January 2026, Python 3.14.3 is the latest stable release. Security updates have been issued for all older supported 3.x branches, down to Python 3.9.24 and then 3.9.25, the final release in the 3.9 series; Python 3.10 has been the oldest supported branch since November 2025. An alpha of Python 3.15 has been released, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support.

Design philosophy and features

Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming, including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a "glue language" because it is purposely designed to integrate components written in other languages. It uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management, as well as dynamic name resolution (late binding), which binds method and variable names during program execution.

Python's design offers some support for functional programming in the Lisp tradition. It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat; responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and an easily extensible interpreter stemmed from his frustrations with ABC, which took the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar while giving developers a choice in their coding methodology. Python lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates the view that "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal; there are, for instance, at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow of the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture."
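As an illustration of that last point, here is a minimal sketch of the three common styles for formatting a string literal (printf-style %, str.format, and f-strings); the variable names are illustrative:

    name = "world"
    # printf-style formatting, the oldest of the three styles
    greeting1 = "Hello, %s!" % name
    # str.format, available since Python 2.6/3.0
    greeting2 = "Hello, {}!".format(name)
    # formatted string literals (f-strings), available since Python 3.6
    greeting3 = f"Hello, {name}!"
    assert greeting1 == greeting2 == greeting3 == "Hello, world!"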
Python's developers typically prioritize readability over performance. For example, they have rejected patches to non-critical parts of the CPython reference implementation that would offer speed increases at too great a cost to clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Transpiling Python to other languages is also possible, but this approach either fails to achieve the expected speed-up, because Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes).

Python is meant to be a fun language to use. This goal is reflected in the name, a tribute to the British comedy group Monty Python, and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch) rather than the typical "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well, be natural or show fluency in the language, or conform with Python's minimalist philosophy and emphasis on readability.

Syntax and semantics

Python is meant to be an easily readable language. Its formatting is visually uncluttered, and it often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most languages indentation has no semantic meaning. The recommended indent size is four spaces.

Python's statements include the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object that has a type. This is called dynamic typing, in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations and, according to Van Rossum, it never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators, and data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels, as the sketch below illustrates.
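A minimal sketch of these generator capabilities; the function names are illustrative, not taken from the Python documentation:

    def running_average():
        # Receives numbers via send() and yields the average so far.
        total, count = 0.0, 0
        average = None
        while True:
            value = yield average   # send() resumes execution here
            total += value
            count += 1
            average = total / count

    def delegate():
        # Since Python 3.3, 'yield from' passes sent values through
        # an additional stack level to the inner generator.
        yield from running_average()

    avg = running_average()
    next(avg)            # advance the generator to its first yield
    print(avg.send(10))  # 10.0
    print(avg.send(20))  # 15.0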
In Python, the distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This leads to some duplicated functionality, for example: list comprehensions versus for-loops, conditional expressions versus if blocks, and the eval() versus exec() built-ins (the first is for expressions, the second for statements). Because a statement cannot be part of an expression, list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot form part of the conditional expression of a conditional statement (the assignment expression operator :=, added in Python 3.8 and noted above, relaxes this restriction in limited contexts).

Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (for example, adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example SpamClass() or EggsClass(); the classes themselves are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python also supports optional type annotations. These annotations are not enforced by the language, but they may be used by external tools such as mypy to catch errors. The standard library's typing module supplies names for use in type annotations, and the mypy project also provides a compiler called mypyc, which leverages type annotations for optimization.

Python includes conventional symbols for the arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator % (with the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2). It also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, representing positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in current Python, the / operator represents true division (or simply division) and the // operator represents floor division; before version 3.0, the / operator performed classic division. Rounding toward negative infinity, though a different method than in most languages, adds consistency: for instance, it implies that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b) when b is a positive integer; maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Versions before Python 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0.
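These behaviors can be checked directly in the interpreter; a short sketch restating the claims above as assertions:

    # True division always yields a float; floor division rounds toward
    # negative infinity (not toward zero, as truncating languages do).
    assert 7 / 2 == 3.5
    assert 7 // 2 == 3
    assert -7 // 2 == -4            # floor, not truncation
    assert 4 % -3 == -2             # the remainder takes the divisor's sign
    assert -3 * (4 // -3) + (4 % -3) == 4   # b*(a//b) + a%b == a holds

    # Exponentiation, and Python 3's round-to-even tie-breaking.
    assert 5 ** 3 == 125
    assert 9 ** 0.5 == 3.0
    assert round(1.5) == 2 and round(2.5) == 2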
Python allows Boolean expressions containing multiple relations in a manner consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c.

Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision, with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Thanks to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation.

Functions are created in Python using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header.

Code examples

Typical introductory examples are the "Hello, World!" program and a program that calculates the factorial of a non-negative integer; sketches of both, together with a function that prints its inputs and takes a default parameter, follow.
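Minimal sketches of these examples (the function and variable names are illustrative):

    def greet(name, greeting="Hello"):
        # 'greeting' has a default value, used when no argument is supplied.
        print(greeting, name)

    greet("world")            # prints: Hello world
    greet("world", "Hi")      # prints: Hi world

    # "Hello, World!" program:
    print("Hello, World!")

    # Factorial of a non-negative integer:
    def factorial(n: int) -> int:
        if n < 0:
            raise ValueError("n must be non-negative")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))       # 120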
Libraries

Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols, such as MIME and HTTP, are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, performing arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications; for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333. Most parts, however, are specified only by their code, internal documentation, and test suites. Because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages.

Development environments

Most Python implementations (including CPython) include a read–eval–print loop (REPL), permitting the environment to function as a command-line interpreter in which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code, and there are also web-browser-based IDEs.

Implementations

CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11; older versions used the C89 standard with several select C99 features. Third-party extensions are not limited to these older C versions and can be implemented in, for example, C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by its virtual machine, and is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (with an experimental installer for Apple M1 Macs since Python 3.9.1). Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and unofficial support has existed for platforms such as VMS. Platform portability was one of Python's earliest priorities: during the development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) require an operating system that supports multithreading, so far fewer operating systems are supported than in the past.

All alternative implementations have at least slightly different semantics; for example, an alternative may use unordered dictionaries, in contrast to current CPython versions, and PyPy, in the larger Python ecosystem, does not support the full CPython C API. Creating an executable from Python code is often done by bundling an entire Python interpreter into the executable, which makes binaries massive even for small programs, although some implementations are capable of truly compiling Python. Among the alternative implementations, Stackless Python is a significant fork of CPython that implements microthreads; it uses the call stack differently, allowing massively concurrent programs, and PyPy also offers a stackless version. Just-in-time Python compilers have been developed but are now unsupported. There are several compilers and transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers and older projects not designed for Python 3.x syntax. A performance comparison of various Python implementations on a non-numerical (combinatorial) workload was presented at EuroSciPy 2013, and Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. Several approaches exist for optimizing Python performance despite the inherent slowness of an interpreted language.

Language development

Python's development is conducted largely through the Python Enhancement Proposal (PEP) process, the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the Steering Council. Enhancement of the language goes hand in hand with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented: backward-incompatible versions, where code is expected to break and must be manually ported; feature releases, which are largely compatible with previous versions but introduce new features; and bugfix releases, which introduce no new features but fix bugs. Many alpha, beta, and release candidates are also published as previews and for testing before final releases.
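At runtime, the parts of the version number are exposed through the standard sys module; a minimal sketch:

    import sys

    # major.minor.micro correspond to the three release types above:
    # e.g., 3.14.3 -> major=3, minor=14, micro=3
    vi = sys.version_info
    print(vi.major, vi.minor, vi.micro, vi.releaselevel)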
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit-test suite during development. The major academic conference on Python is PyCon, and there are also special Python mentoring programs, such as PyLadies.

Naming

Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".