[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-85]
Extraterrestrial life

Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.

Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation.

In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere.

Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from the analysis of telescope and specimen data to radio telescopes used to detect and transmit interstellar communications. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium.

The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given the human history of exploiting other societies.

Context

Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until about 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets, by meteoroids for example, in a process called panspermia.

During most of their stellar evolution, stars fuse hydrogen nuclei into helium nuclei, and the slightly lower mass of the resulting helium allows the star to release the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts fusing helium nuclei to form carbon nuclei. The larger stars can further fuse carbon into heavier elements such as neon and magnesium, and oxygen into silicon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place across the whole universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System.

Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of a galaxy, the Milky Way. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause long time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of about 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would take roughly 100,000 years to arrive. Under current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to account for more combined matter than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology.

There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even a guarantee that liquid water is actually present. Venus is located in the Solar System's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
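Two of the figures above lend themselves to quick back-of-the-envelope checks. First, the interstellar travel time: the sketch below (plain Python, using the rounded distance and speed quoted in the text rather than precise mission data) reproduces the roughly 100,000-year figure.

```python
# Rough check of the Voyager 2 travel-time figure quoted above.
# All values are the rounded ones from the text, not precise mission data.
LIGHT_YEAR_KM = 9.46e12             # kilometers per light year
distance_km = 4.4 * LIGHT_YEAR_KM   # Earth to Alpha Centauri
speed_km_per_h = 50_000             # Voyager 2's approximate speed

years = distance_km / speed_km_per_h / (24 * 365.25)
print(f"travel time: about {years:,.0f} years")   # ~95,000 years
```

Second, the habitable zone itself. A common first-order approximation, used here purely as an illustration, scales the zone's boundaries with the square root of the star's luminosity; the solar boundary values of roughly 0.95 and 1.4 AU are ballpark figures that differ between published models. This sketch anticipates the star-type dependence discussed next.

```python
import math

def habitable_zone(luminosity_solar, inner_au=0.95, outer_au=1.4):
    """First-order estimate: boundaries scale as sqrt(L / L_sun).

    The default solar boundaries are rough illustrative values.
    """
    scale = math.sqrt(luminosity_solar)
    return inner_au * scale, outer_au * scale

# Illustrative stellar luminosities (in solar units)
for name, lum in [("red dwarf", 0.05), ("Sun-like star", 1.0), ("brighter star", 5.0)]:
    inner, outer = habitable_zone(lum)
    print(f"{name:14s} -> habitable zone {inner:.2f} to {outer:.2f} AU")
```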
The actual distances of the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits change along with the star's stellar evolution.

The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief existence of Earth's species may suggest that extraterrestrial life could be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe".

Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments on it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable.

Likelihood of existence

No life in the cosmos beyond Earth has ever been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is the size of the universe, which allows for plenty of planets with a habitability similar to Earth's, while the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth.

Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.

In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is:

N = R* · fp · ne · fl · fi · fc · L

where N is the number of active, communicative civilizations in the Milky Way; R* is the average rate of star formation in the galaxy; fp is the fraction of stars that have planets; ne is the average number of planets that could support life per star with planets; fl is the fraction of those planets that actually develop life; fi is the fraction of life-bearing planets that develop intelligent life; fc is the fraction of civilizations that release detectable signals into space; and L is the length of time over which such civilizations remain detectable. Drake's proposed estimates are as follows, but the numbers on the right side of the equation are agreed to be speculative and open to substitution:

10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000
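Because the equation is a plain product of seven factors, evaluating it is trivial once values are chosen. The sketch below plugs in the illustrative numbers above; they are speculative placeholders, and swapping in different assumptions changes N accordingly.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# The values below are the speculative ones quoted above, not measurements.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

n = drake(r_star=5, f_p=0.5, n_e=2, f_l=1, f_i=0.2, f_c=1, lifetime_years=10_000)
print(f"communicative civilizations in the Milky Way: {n:,.0f}")  # 10,000
```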
The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This makes it impossible to draw firm conclusions from the equation.

Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are some 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation to the Fermi paradox.

Biochemical basis

If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist.

The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems in deep areas of Earth's oceans that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: atoms in a gas move too fast, and those in a solid too slowly, for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane, or propane.

Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting a process of abiogenesis to create life in the first place.
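The bonding argument can be restated compactly. The sketch below is a toy tabulation, not chemistry software: it encodes the nine candidate elements with the covalent bond counts given above, to show why attention narrows to the four-bond elements and then, on stability and abundance grounds, to carbon and silicon.

```python
# Candidate backbone elements and covalent bond counts, as listed above.
covalent_bonds = {
    "boron": 3, "nitrogen": 3, "phosphorus": 3, "arsenic": 3, "antimony": 3,
    "carbon": 4, "silicon": 4, "germanium": 4, "tin": 4,
}

# Four bonds allow long chains plus side links that can carry information.
chain_formers = [e for e, bonds in covalent_bonds.items() if bonds == 4]
print("four-bond (chain-forming) elements:", ", ".join(chain_formers))
```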
Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life.

Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still rely on RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those of Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even as a hypothesis. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it.

The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems, and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches.

The conditions on the other planets of the Solar System, and presumably on planets around other stars and in other galaxies, are very harsh and seem too extreme to harbor any life. These planets can combine intense UV radiation with extreme temperatures, a lack of water, and other conditions that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that, at least at first sight, seem unlikely to have harbored life. Fossil evidence, as well as theories backed up by years of research and study, have marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that the majority of life on Earth now inhabits; hydrothermal vents, for example, are scorching hot because magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, a diverse population of bacteria can be found inhabiting the area surrounding these hydrothermal vents, which suggests that some form of life could be supported even in environments as harsh as those found on other planets. What makes these harsh environments plausible cradles for life on Earth, and possible cradles for life on other planets, is that the relevant chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. These reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and such carbon-fixing chemistry was therefore necessary for the survival, and possibly the origin, of life on Earth. From the limited information scientists have about the atmospheres of planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on such planets, the same carbon-fixing reactions that occur around hydrothermal vents could also occur on their surfaces and possibly lead to the origin of extraterrestrial life.

Planetary habitability in the Solar System

The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now.

The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and then developed in a different way. A runaway greenhouse effect gives it the hottest surface in the Solar System; it has sulfuric acid clouds and a thick carbon dioxide atmosphere at crushing pressure, and all of its surface liquid water has been lost. Comparing the two planets helps in understanding the precise differences that lead to conditions beneficial or harmful to life. Despite the conditions working against life on Venus, there are suspicions that microbial lifeforms may still survive in its high-altitude clouds.

Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient lifeforms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely.

Although the giant planets themselves are highly unlikely to host life, there is much hope of finding it on moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. On Europa, the ocean would be in contact with the rocky seafloor, which helps drive chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into at all, as it releases water into space in eruption columns. The space probe Cassini flew through one of these plumes, but could not make a full study because NASA had not anticipated this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry proceed at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study.

Scientific search

The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and what requirements must be met for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. It is a complex area of study that combines the perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences.

The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over the discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argued were consistent with the presence of living microorganisms. The lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they might have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed away from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012.

A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants using photosynthesis.

In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."

In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Saturn's moon Enceladus, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."

Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the home planet that cannot be attributed to natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.

Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is devoid of biases and preconceptions. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.

The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may well be generated and used on such worlds too. The abundance of chlorofluorocarbons in an atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not powerful enough to study exoplanets in the level of detail required to perceive it.

The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Those speculative structures would emit excess infrared radiation, which telescopes could notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets, but an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature: such elements would, in theory, be found if the star were being used as an incinerator or repository for nuclear waste products.

Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter.
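Most of these exoplanets were found indirectly, for instance by the slight dimming a planet causes as it transits its star; the dimming fraction is roughly the squared ratio of planetary to stellar radius. The sketch below uses rounded radii to show why Earth-sized planets are far harder to detect than Jupiter-sized ones. The transit method is offered here as standard background rather than something described in the passage above.

```python
# Transit photometry: fractional dimming ~ (R_planet / R_star)^2.
# Radii are rounded approximate values in kilometers.
R_SUN, R_EARTH, R_JUPITER = 696_000, 6_371, 69_911

def transit_depth(r_planet_km, r_star_km=R_SUN):
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-sized planet:   {transit_depth(R_EARTH):.2e}")    # ~8e-05 (0.008% dimming)
print(f"Jupiter-sized planet: {transit_depth(R_JUPITER):.2e}")  # ~1e-02 (about 1% dimming)
```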
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would mean 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed in the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; according to most definitions of a planet, however, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets.

The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. On Earth, this replenishment occurs through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy while it transits its star, though this might only be feasible with dim stars like white dwarfs.

History and cultural impact

The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider the universe as inherently understandable, and they rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the understanding that Earth is round and not flat. The cosmos was first structured in a geocentric model that held that the Sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds: in their understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
Eventually two schools emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire, and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals, and its plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center; it was also the only planet in the universe.

Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism. Multiple "worlds" that support human life are mentioned in Jain scriptures, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors spoke about other worlds, they spoke about places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them.

The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of Greek civilization. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese, and its own scholars, and this knowledge spread through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere.

By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, removed the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special.

The new ideas were met with resistance from the Catholic Church. Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Inquisition, which tried and executed him.

The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also why it works that way. There had been very little actual discussion of extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical way to investigate it.

The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th- and 19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.

As a consequence of the belief in spontaneous generation, there was little thought given to the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System nevertheless remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked for good the idea of the existence of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).

The science fiction genre, although not yet so named at the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, many structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, while later, more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site.

The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial lifeforms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.

The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of the time failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, or weather phenomena, or as hoaxes.

Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries, from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin, and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical, and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth.

By the 21st century, it was accepted that multicellular life in the Solar System exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. The knowledge of planetary habitability makes it possible to consider, in scientific terms, the likelihood of finding life at each specific celestial body, as it is known which features are beneficial and which are harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may still be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.

Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, which claims that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.

As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for its resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake, and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs spent a year, in 1977, discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the case of an extraterrestrial contact.

One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. Part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program; according to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, then head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research, and he acknowledged the possibility of the existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aerospatial phenomena" that maintains a publicly accessible database of such phenomena, with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of them, an extraterrestrial origin can neither be confirmed nor denied.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". However, he disagreed with his former colleague Haim Eshed, who claimed that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, they were not at first imagined as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the new notion that evolution on other planets might take different directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Givatayim] | [TOKENS: 1051] |
Contents Givatayim Givatayim (Hebrew: גבעתיים, lit. 'Two hills') is a city in Israel east of Tel Aviv. It is part of the Gush Dan metropolitan area. Givatayim was established in 1922 by pioneers of the Second Aliyah. In 2023 it had a population of 58,557. The name of the city comes from the "two hills" on which it was established: Borochov Hill and Kozlovsky Hill. Kozlovsky is the highest hill in the Gush Dan region at 85 metres (279 ft) above sea level. The city was expanded in the 1930s, so that today it is actually situated on four hills: Borochov, Kozlovsky, Poalei HaRakevet ("railroad workers"), and Rambam Hill. History Archaeological remains of a Chalcolithic settlement have been found at the site of what is now Givatayim. The modern town was founded on April 2, 1922 by a group of 22 Second Aliyah pioneers led by David Schneiderman. The group purchased 300 dunams (300,000 square metres (3,200,000 sq ft)) of land on the outskirts of Tel Aviv that became the Borochov Neighbourhood (Shechunat/Shekhunat Borochov), the first workers' neighbourhood in the country. It was named for Dov Ber Borochov, founder of the Poalei Zion workers' party. Later, another 70 families joined the group, receiving smaller plots. The land was purchased with their private savings, but was voluntarily transferred to the Jewish National Fund, which organized Jewish settlement at the time, in keeping with the pioneers' socialist beliefs. Shechunat Borochov is credited with a number of innovations in the early Jewish settlement movement, including establishing the first cooperative grocery store (Tzarkhaniya, "Consumer"), which still functioned in the same location into the 1980s. Shechunat Rambam was another neighborhood in what is today known as Givatayim. Rambam used to be more "bourgeois" in the eyes of Borochov's founders, who were considered socialists; thus, the two neighborhoods used to function differently from an economic viewpoint. Over time, more neighborhoods developed: Sheinkin (1936; named after Menahem Sheinkin), Givat Rambam (1933; named after Maimonides), Kiryat Yosef (1934; named after the biblical figure), and Arlozorov (1936; named after Haim Arlosoroff). All these neighborhoods were merged to form a local council in August 1942. The city was also settled on the land of Al-Khayriyya, a former Palestinian village, in April 1948. Givatayim was declared a city in 1959. Education and culture Givatayim has 41 kindergartens, 9 elementary schools and 4 high schools. As of 2018, the city has one of the highest rates of secondary school matriculation in the country. Mayor Reuven Ben-Shahar initiated a special high school exam assistance program that, after three years, resulted in an 11% increase in high school test results in 2010. Thelma Yellin High School for the Arts alumni include Michal Yannai, Ido Mosseri, Tal Mosseri, Shai Maestro, Dikla Hadar, Shira Haas, Ohad Knoller, Ilanit, Mili Avital, Ziv Koren, Yael Tal and Maya Dunietz. Urban development Eurocom Tower is a 70-story skyscraper complex consisting of four apartment towers and a 50-story office building. A large square connects to surrounding areas with bridges and underground passes. The complex is near Ramat Gan and its Diamond Exchange District. In addition to Eurocom Tower, other high-rise projects are planned for the city. 
According to former Givatayim mayor Reuven Ben-Shahar, the municipality's policy is to promote high-rise construction on the city's outer edges, while preserving the fabric of residential neighborhoods deeper within the city, including the city center. Current plans for the northwest of the city envision high-rise towers along Katznelson and Aliyat Hanoar Streets near the boundaries of Tel Aviv and Ramat Gan. As part of the redevelopment, Katznelson Street will be colonnaded along its length, and 2-metre-wide cycle paths are planned for both sides of the road, with one lane for buses and another for cars. Mayors Reuven Ben-Shahar was the first candidate from Kadima to win a city election and the first mayor of Givatayim who was not from the Israeli Labor Party. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Barry_Barish] | [TOKENS: 1850] |
Contents Barry Barish Barry Clark Barish (born January 27, 1936) is an American experimental physicist and Nobel laureate. He is the Linde Professor of Physics, Emeritus, at the California Institute of Technology and a leading expert on gravitational waves. In 2017, Barish was awarded the Nobel Prize in Physics along with Rainer Weiss and Kip Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves". He said, "I didn't know if I would succeed. I was afraid I would fail, but because I tried, I had a breakthrough." In 2018, he joined the faculty at the University of California, Riverside, becoming the university's second Nobel Prize winner on the faculty. In the fall of 2023, he joined Stony Brook University as the inaugural President's Distinguished Endowed Chair in Physics. In 2023, Barish was awarded the National Medal of Science by President Biden in a White House ceremony. Birth and education Barish was born in Omaha, Nebraska, the son of Lee and Harold Barish. His parents' families were Jewish immigrants from a part of Poland that is now in Belarus. Just after World War II, the family moved to Los Feliz in Los Angeles, where he attended John Marshall High School, among other schools. He earned a B.A. degree in physics (1957) and a Ph.D. degree in experimental high energy physics (1962) at the University of California, Berkeley. He joined Caltech in 1963 as part of a new experimental effort in particle physics using frontier particle accelerators at the national laboratories. From 1963 to 1966, he was a research fellow, and from 1966 to 1991 an assistant professor, associate professor, and professor of physics. From 1991 to 2005, he was the Linde Professor of Physics, and after that Linde Professor of Physics, Emeritus. From 1984 to 1996, he was the principal investigator of the Caltech High Energy Physics Group. Research Barish's early experiments were performed at Fermilab, using high-energy neutrino collisions to reveal the quark substructure of the nucleon. Among other results, these experiments were among the first to observe the weak neutral current, a linchpin of the electroweak unification theories of Salam, Glashow, and Weinberg. In the 1980s, he directed MACRO, an experiment in a cavern at Gran Sasso, Italy, that searched for exotic particles called magnetic monopoles and also studied penetrating cosmic rays, including neutrino measurements that provided important confirmatory evidence that neutrinos have mass and oscillate. In 1991, Barish was named the Maxine and Ronald Linde Professor of Physics at Caltech. In the early 1990s, he spearheaded GEM (Gammas, Electrons, Muons), an experiment that would have run at the Superconducting Super Collider; it was approved after the earlier project L*, led by Samuel Ting (with Barish as chairman of the collaboration board), was rejected by SSC director Roy Schwitters. Barish was GEM spokesperson. Barish became the principal investigator of the Laser Interferometer Gravitational-wave Observatory (LIGO) in 1994 and its director in 1997. He led the effort through the approval of funding by the NSF National Science Board in 1994 and the construction and commissioning of the LIGO interferometers in Livingston, Louisiana, and Hanford, Washington, in 1997. He created the LIGO Scientific Collaboration, which now numbers more than 1,000 collaborators worldwide, to carry out the science. The initial LIGO detectors reached design sensitivity and set many limits on astrophysical sources. 
The Advanced LIGO proposal was developed while Barish was director, and he has continued to play a leading role in LIGO and Advanced LIGO. The first detection, the merger of two roughly 30-solar-mass black holes, was made on September 14, 2015. This represented the first direct detection of gravitational waves, a century after Einstein predicted them in 1916, and the first ever observation of the merger of a pair of black holes. Barish delivered the first presentation on this discovery to a scientific audience at CERN on February 11, 2016, simultaneously with the public announcement. From 2001 to 2002, Barish served as co-chair of the High Energy Physics Advisory Panel subpanel that developed a long-range plan for U.S. high energy physics. He has chaired the Commission on Particles and Fields and the U.S. liaison committee to the International Union of Pure and Applied Physics (IUPAP). In 2002, he chaired the NRC Board of Physics and Astronomy Neutrino Facilities Assessment Committee report, "Neutrinos and Beyond". From 2005 to 2013, Barish was director of the Global Design Effort for the International Linear Collider (ILC). The ILC is a proposed particle accelerator intended to complement the Large Hadron Collider at CERN by enabling precision studies at the TeV energy scale. The project has involved international coordination among research institutions to develop a common technical design framework for a large-scale particle physics facility. Honors and awards In 2002, he received the Klopsteg Memorial Award of the American Association of Physics Teachers. Barish was honored by the University of Bologna (2006) and the University of Florida (2007), where he received honorary doctorates. In 2007, he delivered the Van Vleck lectures at the University of Minnesota. The University of Glasgow awarded Barish an honorary degree of Doctor of Science in 2013. Barish was honored as a Titan of Physics in the On the Shoulders of Giants series at the 2016 World Science Festival. In 2016, Barish received the Enrico Fermi Prize "for his fundamental contributions to the formation of the LIGO and LIGO-Virgo scientific collaborations and for his role in addressing challenging technological and scientific aspects whose solution led to the first detection of gravitational waves". Barish was a recipient of the 2016 Smithsonian magazine American Ingenuity Award in the Physical Science category. Barish was awarded the 2017 Henry Draper Medal from the National Academy of Sciences "for his visionary and pivotal leadership role, scientific guidance, and novel instrument design during the development of LIGO that were crucial for LIGO's discovery of gravitational waves from colliding black holes, thus directly validating Einstein's 100-year-old prediction of gravitational waves and ushering a new field of gravitational wave astronomy." Barish was a recipient of the 2017 Giuseppe and Vanna Cocconi Prize of the European Physical Society for his "pioneering and leading role in the LIGO observatory that led to the direct detection of gravitational waves, opening a new window to the Universe." Barish was a recipient of the 2017 Princess of Asturias Award for his work on gravitational waves (jointly with Kip Thorne and Rainer Weiss). 
Barish was a recipient of the 2017 Fudan-Zhongzhi Science Award "for his leadership in the construction and initial operations of LIGO, the creation of the international LIGO Scientific Collaboration, and for the successful conversion of LIGO from small science executed by a few research groups into big science that involved large collaborations and major infrastructures, which eventually enabled gravitational-wave detection" (jointly with Kip Thorne and Rainer Weiss). In 2017, he won the Nobel Prize in Physics (jointly with Rainer Weiss and Kip Thorne) "for decisive contributions to the LIGO detector and the observation of gravitational waves". In 2018, Barish was honored as Alumnus of the Year by the University of California, Berkeley. In 2018, he received an honorary doctorate from Southern Methodist University. In 2018, he was conferred the honorary degree Doctor Honoris Causa of Sofia University St. Kliment Ohridski. In 2023, he was awarded the inaugural Copernicus Prize, bestowed by the government of Poland on "those who made exceptional contributions to the development of world science." In 2023, he was awarded the National Medal of Science for "exemplary service to science, including groundbreaking research on sub-atomic particles. His leadership of the Laser Interferometer Gravitational-Wave Observatory led to the first detection of gravitational waves from merging black holes, confirming a key part of Einstein's Theory of Relativity. He has broadened our understanding of the universe and our Nation's sense of wonder and discovery." Barish has been elected to fellowship in a number of learned societies. Family Barry Barish is married to Samoan Barish. They have two children, Stephanie Barish and Kenneth Barish, professor and chair of Physics & Astronomy at the University of California, Riverside, and three grandchildren, Milo Barish Chamberlin, Thea Chamberlin, and Ariel Barish. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-21] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). 
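Since the discussion above treats actors as nodes and ties as edges, a tiny worked example may help. The following is a minimal sketch, assuming the Python networkx library and a made-up set of ties (the names and relationships are hypothetical, not from any study cited here); it locates "influential entities" via two standard centrality measures.

```python
import networkx as nx

# Hypothetical ties among six actors (illustrative only)
ties = [("Ana", "Ben"), ("Ana", "Caro"), ("Ben", "Caro"),
        ("Caro", "Dee"), ("Dee", "Eli"), ("Eli", "Fay")]
G = nx.Graph(ties)

# Degree centrality: share of possible direct ties each actor has
print(nx.degree_centrality(G))

# Betweenness centrality: how often an actor sits on shortest paths
# between other pairs -- a common operationalisation of influence
print(nx.betweenness_centrality(G))
```

Here "Caro" and "Dee" score highest on betweenness because every path between the two halves of the network passes through them, which is exactly the structural notion of influence the perspective emphasises.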
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. In the 1930s, major developments in the field came from several groups in psychology, anthropology, and mathematics working independently. In psychology, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, are often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods both to emerging data about online social networks and to "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. 
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. 
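The scale-free properties described above can be illustrated computationally. A minimal sketch, again assuming networkx: it grows a Barabási–Albert preferential-attachment network and inspects the hub-dominated degree distribution and the per-node clustering coefficients mentioned in the passage (the parameters are arbitrary illustrations, not values from any cited study).

```python
import networkx as nx

# Barabási-Albert preferential attachment: 1000 nodes, each new node
# attaching to m=3 existing nodes with probability proportional to degree
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

degrees = [d for _, d in G.degree()]
print("mean degree:", sum(degrees) / len(degrees))
print("max degree:", max(degrees))  # hubs far exceed the average degree

# Clustering coefficients of the five highest-degree nodes ("hubs")
clustering = nx.clustering(G)
hubs = sorted(G.nodes, key=G.degree, reverse=True)[:5]
print({node: round(clustering[node], 3) for node in hubs})
```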
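Balance theory, just mentioned, also lends itself to a worked example. Under Heider's criterion, a signed triad is balanced exactly when the product of its three edge signs is positive; the sketch below (plain Python, hypothetical relations) checks the cases discussed earlier, including the rivalrous love triangle.

```python
# Heider's balance criterion for a signed triad: balanced iff the
# product of the three edge signs (+1 friendly, -1 hostile) is positive
def is_balanced(sign_ab, sign_bc, sign_ac):
    return sign_ab * sign_bc * sign_ac > 0

# "The friend of my friend is my friend": all positive -> balanced
print(is_balanced(+1, +1, +1))  # True

# Rivalrous love triangle: A likes B and C, but B and C are hostile
print(is_balanced(+1, -1, +1))  # False -- unbalanced, likely to change

# "The enemy of my enemy is my friend": two negatives -> balanced
print(is_balanced(-1, -1, +1))  # True
```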
Few complete theories have been produced from social network analysis. Two exceptions are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason the members of the clique were attracted to one another in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters; a sketch of how this is measured follows below. For example, in business networks, this is beneficial to an individual's career, because they are more likely to hear of job openings and opportunities if their network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. 
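The structural-hole idea above has a standard quantitative form: Ronald Burt's constraint and effective-size measures, which networkx implements as nx.constraint and nx.effective_size. A minimal sketch with a hypothetical toy network of two closed triangles joined only through a broker "B":

```python
import networkx as nx

# Two dense clusters joined only through broker "B", who spans the
# structural hole between them (hypothetical toy network)
G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),  # cluster A
                  ("c1", "c2"), ("c2", "c3"), ("c1", "c3"),  # cluster C
                  ("B", "a1"), ("B", "c1")])                 # the bridge

# Burt's constraint: lower values mean a less redundant, more
# "hole-rich" ego network; the broker "B" scores lowest here
print({n: round(v, 3) for n, v in nx.constraint(G).items()})

# Effective size: roughly, the number of non-redundant contacts per ego
print({n: round(v, 3) for n, v in nx.effective_size(G).items()})
```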
Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. 
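The respondent-driven sampling mentioned above can be caricatured in a few lines. This sketch imitates only the recruitment mechanics (respondents passing a fixed number of "coupons" to un-sampled acquaintances over successive waves), not the statistical re-weighting that real respondent-driven estimators apply; networkx's built-in karate-club graph stands in for a hard-to-enumerate population, and all parameter values are arbitrary.

```python
import random
import networkx as nx

def snowball_sample(G, seeds, coupons=3, waves=2, rng_seed=0):
    """Recruit up to `coupons` un-sampled neighbours per respondent,
    repeating for `waves` rounds of referrals."""
    rng = random.Random(rng_seed)
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            candidates = [n for n in G.neighbors(person) if n not in sampled]
            for recruit in rng.sample(candidates, min(coupons, len(candidates))):
                sampled.add(recruit)
                next_frontier.append(recruit)
        frontier = next_frontier
    return sampled

G = nx.karate_club_graph()  # stand-in for a hidden population
print(sorted(snowball_sample(G, seeds=[0])))
```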
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Network research on organizations also studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former Big Three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations, and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. 
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area. |
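The homophily and segregation measurement just described has a standard network statistic: attribute assortativity. A minimal sketch, assuming networkx and a hypothetical two-group network (the groups and ties are invented for illustration):

```python
import networkx as nx

# Hypothetical network: two tight groups ("x" and "y") with one cross-tie
G = nx.Graph([(0, 1), (0, 2), (1, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
nx.set_node_attributes(
    G, {0: "x", 1: "x", 2: "x", 3: "y", 4: "y", 5: "y"}, "group")

# Attribute assortativity: +1 means ties occur only within groups
# (perfect homophily/segregation); 0 means random mixing; negative
# values mean ties run mostly between groups
print(nx.attribute_assortativity_coefficient(G, "group"))
```

Because six of the seven ties here stay within a group, the coefficient comes out strongly positive, quantifying the segregation the paragraph describes.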
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-177] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the last of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. 
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷα zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals also have structural characteristics that set them apart from all other living things. Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres. 
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 Mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. 
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny in which the successive outgroups to the animals are Holomycota (including the fungi), Ichthyosporea, Pluriformea, and Filasterea; uncertain relationships are indicated in their cladogram with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, the Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals; the presence of hox genes in Placozoa, however, suggests that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both hypotheses, favouring a sponge-sister tree in which the successive branches are Porifera, Ctenophora, Placozoa, Cnidaria, and Bilateria (their ctenophore-sister tree simply interchanges the places of the ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to construct a ctenophore-sister phylogeny, with successive branches Ctenophora, Porifera, Placozoa, Cnidaria, and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogeny for the Bilateria places the Xenacoelomorpha as the sister group to the remaining bilaterians, which split into the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia). Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, in which cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting; among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidaria, and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food; a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds, and mammals play roles in literature and film, such as in giant bug movies. Animals, including insects and mammals, feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-effbot-call-by-object_21-0] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community, and it is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL), capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989, and Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3; it no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008; it was a major revision, with some new semantics and changed syntax, and not completely backward-compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches received final security updates ending with the 3.9 series, whose last version is Python 3.9.25. Python 3.10 has been the oldest supported branch since November 2025. Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming, including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition": it has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions; and the standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML, as sketched below. Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, a collection of aphorisms about Python's design. However, Python has received criticism for violating these principles and adding unnecessary language bloat; responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach in which "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal; there are, for example, at least three ways to format a string literal, with no certainty as to which one a programmer should use (see the second example below). Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture."
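A minimal sketch of those functional tools in use (only standard builtins and functools are needed; the variable names are illustrative, not from any particular source):

from functools import reduce

nums = [1, 2, 3, 4, 5]

evens = list(filter(lambda n: n % 2 == 0, nums))   # filter -> [2, 4]
squares = list(map(lambda n: n * n, nums))         # map    -> [1, 4, 9, 16, 25]
total = reduce(lambda a, b: a + b, nums)           # reduce -> 15

# The same ideas expressed as comprehensions and a generator expression:
squares_list = [n * n for n in nums]    # list comprehension
square_of = {n: n * n for n in nums}    # dict comprehension
remainders = {n % 3 for n in nums}      # set comprehension
lazy_squares = (n * n for n in nums)    # generator expression (evaluated lazily)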
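As a second sketch, three standard ways to build the same formatted string: printf-style %, the str.format method, and f-strings (the last available since Python 3.6); the values chosen here are arbitrary:

name, count = "spam", 3
s1 = "%s x%d" % (name, count)        # printf-style formatting
s2 = "{} x{}".format(name, count)    # str.format() method
s3 = f"{name} x{count}"              # formatted string literal (f-string)
assert s1 == s2 == s3 == "spam x3"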
Python's developers typically prioritize readability over performance; for example, they reject patches to non-critical parts of the CPython reference implementation that would offer speed increases that do not justify the cost in clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels, as sketched below.
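A minimal sketch of a generator receiving data via send(), with yield from delegating through an extra stack level; the function names are illustrative only, not from the Python documentation:

def running_total():
    # Each send(value) adds to the total; the updated total is yielded back.
    total = 0
    while True:
        value = yield total
        if value is not None:
            total += value

def delegator():
    # 'yield from' (Python 3.3+) forwards sent values through this stack level.
    yield from running_total()

gen = delegator()
next(gen)           # prime the generator; runs to the first yield (0)
print(gen.send(5))  # 5
print(gen.send(7))  # 12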
In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example list comprehensions versus for-loops, and conditional expressions versus if blocks. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style; current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors, as sketched below. Python includes a module, typing, that provides several type names for use in annotations, and mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) It also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers with the / operator produces floating-point results. The behavior of division has changed significantly over time: in current Python terms, the / operator represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2, whereas Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0.
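The division, modulo, exponentiation, and rounding behaviors described above can be checked directly; every assertion in this short sketch passes under Python 3 (the operand values are arbitrary):

# True division always yields a float; floor division rounds toward -infinity.
assert 7 / 2 == 3.5
assert 7 // 2 == 3 and -7 // 2 == -4

# The remainder takes the divisor's sign: it lies in (b, 0] for negative b.
assert 4 % -3 == -2

# The floor-division identities from the text hold.
a, b = 17, 5
assert (a + b) // b == a // b + 1
assert b * (a // b) + a % b == a

# Exponentiation, and round-to-even tie-breaking.
assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0
assert round(1.5) == 2 and round(2.5) == 2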
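And a minimal sketch of optional type annotations; the function itself is hypothetical, while the annotation syntax and the mypy workflow are standard:

from typing import Optional

def find_index(needle: int, haystack: list[int]) -> Optional[int]:
    # Annotations document intent; CPython ignores them at runtime.
    # (The list[int] form requires Python 3.9 or later.)
    for i, value in enumerate(haystack):
        if value == needle:
            return i
    return None

find_index("3", [1, 2, 3])  # runs (and returns None), but a checker such as
                            # mypy reports the str argument as a type error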
Python allows Boolean expressions that contain multiple equality relations in a manner consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples: a function that prints its inputs, a "Hello, World!" program, and a program to calculate the factorial of a non-negative integer are shown below.
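The referenced examples, in canonical form (the factorial is given here as a simple iterative rendering; the source may use a different version):

# A function that prints its inputs; parameter b has a default value.
def show(a, b="default"):
    print(a, b)

show(1)       # prints: 1 default
show(1, 2)    # prints: 1 2

# "Hello, World!" program:
print("Hello, World!")

# Factorial of a non-negative integer:
def factorial(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))   # 120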
Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command-line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11; older versions use the C89 standard with several select C99 features. Third-party extensions are not limited to these older C versions; they can be implemented in C11 or C++, for example. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine, and is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there was unofficial support for VMS. Platform portability was one of Python's earliest priorities: during the development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and the language now supports far fewer operating systems than in the past, many outdated platforms having been dropped. All alternative implementations have at least slightly different semantics; for example, an alternative implementation may use unordered dictionaries, in contrast to current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binary sizes large even for small programs, though there are implementations capable of truly compiling Python. Among the alternative implementations, Stackless Python is a significant fork of CPython that implements microthreads; it uses the call stack differently, thus allowing massively concurrent programs, and PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers and transpilers to high-level object languages, where the source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers; some older projects also existed, including compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance despite the overhead of interpretation, including the strategies already mentioned: moving speed-critical code to extension modules, using a just-in-time compiler such as PyPy, and transpilation. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit-test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Max_Steel] | [TOKENS: 9573] |
Contents Max Steel Max Steel is an American media franchise produced and owned by the multinational company Mattel. Originally released as a line of action figures marketed from 1999 to 2022, the Max Steel name has expanded to live-action films, animated series, and video games. The original figures, based on the first TV series, were similar to the original 12-inch G.I. Joe toys, consisting almost entirely of different versions of Max Steel, the main character, plus one or two of his enemies, a couple of vehicles, and two or three special packages. The original toy series ran from 1999 to 2012. At the end of that period, it was replaced by a different series of toys with the same brand name, but with a change in quality and design intended to tie into the companion TV series in 2013. The 2013 line dropped the 1/6 scale of the original and reduced the number of articulation points and action features of the figures. Max Steel was simultaneously developed into an animated series of the same name, which originally aired from February 25, 2000, to January 15, 2002, followed by nine direct-to-video animated films released annually from 2004 to 2012. A reboot aired on Disney XD in the United States, where it premiered on April 1, 2013. Original toy series, 1999–2012 In 1998, Netter Digital was commissioned by Mattel to create and develop a secret-agent-themed TV series partially based on James Bond, but aimed at young children. The series premiered in 2000, but the first toys based on it hit the shelves in October 1999. The Max Steel toy line quickly became a success, although most of the first toys were completely different from the characters on the series. This may be because the series and the toys were developed simultaneously but independently. Most early Max Steel toys wore distinctly military, adventure, or sport clothing unrelated to the series in any way, yet surprisingly similar to Big Jim, Mattel's action figure of the 1970s. In the Latin American market, many Big Jim toys had been sold at that time under the name "Kid Acero", which translates as "Kid Steel", a different Mattel toy line that also had a plot similar to that of Max Steel's TV series. Eventually, Mattel addressed these apparent coincidences in a TV episode revealing that Max Steel is indeed Big Jim's son, and that a secondary character named "Jefferson Smith" is actually "Big Jeff" from the same classic Big Jim toy line, just 20 years older. This connected the two toy lines and made this one of the first times a toy company continued one generation of toys into another, taking into consideration their history, backgrounds, and timeline. After the first year, development of the series went to Foundation Imaging, an animation company that eventually declared bankruptcy at the end of the second season. Mattel then chose Rainmaker Entertainment as the main animation studio, which at the time was also in charge of Transformers: Beast Wars. Following the same formula as Beast Wars, once Mainframe took control of the production, every new toy made an appearance in the series or the films, so toys and series finally matched. After the Endangered Species film, every new set of toys also included at least a couple of wild animals as companions for Max. The original run of the toy series began in 1999 and ended in late 2012, with the reboot of the TV series starting in early 2013.
Since the toy line was aimed solely at boys, no female figures were ever produced, despite the constant presence of strong women in the series as main or supporting characters. The entire initial run of Max Steel action figures differed considerably from the TV character, as the character was still in development at the time. Most Max Steel vehicles in this series were blue with yellow or green accents, like the early-1980s Big Jim sport and spy series. This particular combination of colors was used widely on all Max Steel toys for waves 1 and 2, despite the fact that the main characters in the series wear blue and brown uniforms. Not until wave 3 was the first Max Steel action figure identical to the TV character released ("Going Turbo!"), a year after Max Steel's initial debut, and in the new Urban Siege subline rather than the main toy line. In addition to the multiple Max toy versions, only one villain, named Psycho, was released as the main antagonist, although in the TV series Max had over a dozen recurring enemies. Waves 1 and 2 included exactly the same Psycho action figure, with only minor changes to the box; wave 3 changed Psycho's mechanical arm to a new spring-loaded one, while the rest of the body remained mostly the same. Several large-size vehicles, including a jet, a boat, and a couple of sports cars, were released as part of this initial series. At the end of 2000, Mattel decided to divide the Max Steel figures into different sublines. All sport-related toys, for example, would go to "Extreme Sports Adventure," while others would go into "Urban Siege," "Snake Island," and so on. Vitriol, a new villain, was produced as the main antagonist of the Urban Siege subline, using the same molds used to produce Psycho. Vitriol and Psycho wear the same pants, albeit in different colors; the only notable difference is that while Psycho's right arm is a bionic construction that can transform into a claw/laser, Vitriol's arms were green, with light-up features. After the September 11, 2001 attacks, all Urban Siege action figures were immediately recalled because they contained "Secret Mission" cards. Each card contained clues and plots about fictional terrorist attacks on American soil: children were supposed to use the cards to be "informed of their next mission and stop the terrorists" before they could demolish a building, spread a deadly virus, or detonate a nuclear bomb, among other tasks. At least a couple of Vitriol's cards included the text: "Vitriol is on top of the World Trade Center ready to blast the city with his deadly energy waves! Your Mission: Stop Vitriol before he destroys New York City!". Once any mention of terrorism in New York had been eliminated, the toys returned to the shelves, but this time as part of the main line. The Urban Siege series, which focused on counterterrorism, ended immediately after the recall; other figures from different lines retained their respective mission cards. Rumor has it that the real reason for the recall was Vitriol's low popularity and poor sales, especially since it is the only Max Steel action figure never produced again after its initial release. In 2004, Mattel decided to cancel the TV show and create a series of direct-to-DVD films instead, to be released on an annual schedule, starting with Endangered Species. This also marked a completely new direction for the toy line.
Since then, every new set of Max Steel toys has appeared in its respective film, a formula also used for Barbie, Monster High, and other Mattel products. As a result of this decision, several iconic characters and vehicles not included in a given film were removed from the toy line. The N-Tek minijet and the Sport Coupe, which were Max's primary means of transportation in the TV series, did not make the cut into the films, so the respective toys were discontinued before the films premiered. Many other elements from the series were also removed from the toys, including any mention of Max's secret identity, "Josh McGrath". Although no toys based on Josh were ever released, he was often mentioned on the toys' packaging. In the 2005 film Forces of Nature, a new villain was introduced into the toy line: an artificial creature named Elementor, who has the ability to emulate five different elements (water, fire, air, earth, and metal). This allowed Mattel to create several different versions of the same character, as opposed to the previous toy series, in which usually only one or two villain figures were produced. At one point, Mattel released ten different Elementor variations, two for each element, at the same time. For the second wave, all Max figures were reassigned to a specific "World of...", with arms, equipment, and accessories designed to challenge one specific Elementor version. Thus, in the "World of Water", Max is dressed as a diver or surfer, uses a surfboard and a boat, and battles Water Elementor only; in the "World of Air", he has a parachute and a jet pack and battles Air Elementor; and so on. The World of Elements toy series lasted three years, with a total of six waves. In 2007, Mattel dropped the Going Turbo! concept, present from the beginning of the series as Max's battle cry (and also used to transform into his superhuman form), in favor of Adrenalink. The most notable change on the packaging was that the Max Steel logo changed from yellow to green. The Adrenalink subline includes almost all sport and adventure versions of Max not related to Elementor. In mid-2007, Mattel released "Max gear" for children to wear for the first time. The toys included a retractable ninja sword, Max's communicator, Max's suit with battle sounds and lights, night-vision glasses, binoculars, and other similar items. This line introduced a new villain to the toy series: the release of the first Extroyer action figure was announced as a special event, only a couple of weeks before the film Dark Rival premiered on Cartoon Network Philippines in late 2007. The original package contained both Max and Extroyer action figures, but Extroyer's face and body were mostly covered by the package's artwork, so no one could tell for sure what he looked like. Some time later, the same Extroyer became available as a stand-alone figure, this time uncovered. All action figures related to the Dark Rival film and this new villain are often denoted by the word "Extroyed" on the front of the package, with new purple or dark logos. The last wave of the Extroyed series also included a crime-fighting partner for Max for the very first time. Although in all media Max was supported by a large cast of allies, only an android named Cytro made it into the action figures. Mattel made it a special event and reused the same mystery double pack used for Extroyer's initial release. From this point, several additional Cytro figures were released over the years, until the end of the original run of Max Steel.
Released for the first time in 2009, three different series separated the main toy line into numbered, themed missions, much as the Urban Siege line had separated the military-themed figures from the sport ones in 2000. The "Animal Encounter" subset grouped all animal- and wildlife-related toys, with Extroyer as the main villain, while the "High Voltage" subset focused mostly on water and lightning versions of Elementor. A third subset, named "N-Tek Invasion", simply grouped all other items that did not belong to the previous two. The Turbo Missions packages also carried large set numbers (1, 2, 3) referring to their respective missions. In 2010, a second wave of Turbo Missions action figures was released, this time with the themes "Bio-Threat", "Cyber-Attack", and "Night Strike". As before, Bio-Threat grouped all earth-pollution-related adventures, while Night Strike showcased glow-in-the-dark action figures; Cyber-Attack was mostly the same as N-Tek Invasion, grouping all other enemies focused on causing mayhem inside N-Tek's headquarters. In 2009, Mattel also released an Earth-protection and conservation-themed series of toys, whose main villain was a pollution-based monster named ToxZon. This included toys with water-squirting and ooze-dropping features, plus a few light-and-sound toys. These toys' packaging carried phrases dedicated to recycling, sustainability, and green solutions for the planet. Although focused on ToxZon, later waves also included new, final versions of Cytro, Extroyer, and Elementor, and ended with the release of a new villain named Makino in 2011. This was also the end of the original run of Max Steel toys. Reboot toy series, 2013–2022 In 2013, Mattel ended its relationship with Mainframe and decided to reboot the TV series after 13 years. As part of this decision, Playground Productions, Nerd Corps Entertainment, and FremantleMedia Kids & Entertainment created a completely different origin story, and Mattel produced new toys based on the new designs, notably different from the original ones. The most notable change was the dropping of fabric clothes and wearable accessories: while the original action figures came with fabric vests and pants and detachable or snap-on accessories, allowing the figures to be dressed and undressed with additional gear, the new ones had no clothes at all, with all features molded directly onto the figure's surface. This also allowed Mattel to cut the production cost of the action figures. The new figures also lacked spring, sound, or light features, which did not return until 2015, and then only on a limited number of figures. The reboot series was also notable for releasing figures of Max's allies for the first time. In the original toy line, only Cytro was produced as Max's crime-fighting companion; in the reboot series, Forge Ferrus, Ven-Ghan, and La Fiera action figures were released in addition to Cytro. Another notable change was that the new Max Steel figures from 2013 to 2016 were molded and shaped as a 16-year-old boy, noticeably younger and less muscular than the originals. Figures made after 2016 were retooled to emulate an 18-year-old, making this the very first time Max visibly aged in both the toys and the series. The later action figures also changed Max's face, making it more similar to the original from 1999.
The toy line was discontinued in 2022. Comic books When the first Max Steel toys were released in 1999, Mattel distributed a free 12-page comic book titled Take it to the Max to introduce the character to children. The comic was written by Richard Bruning based on the sourcebook by Andy Hartnell, and penciled by Scott Benefiel with Tom McWeeney and Tommy Yune; the inkers were Jasen Rodriguez, Tom McWeeney, and Lucian Rizzo. Four language versions of this comic are known: English, Spanish, Italian, and Greek. The English version was distributed mostly in America and the United Kingdom; the Spanish version was largely distributed in Latin America and Spain; the Italian one was distributed in Italy; and the Greek one in Greece, especially through the Modern Times superhero comics. The comic consists mostly of two different briefing presentations, supposed to happen at the same time at DREAD and N-Tek headquarters respectively, in which each CEO explains Max's abilities, powers, and capacities from an opposing point of view. While Jefferson Smith presents Max as a great tool to counter terrorism, John Dread considers Max a major threat. In the background, while hearing Jefferson's presentation, Josh remembers the accident that transformed him into Max Steel and adds complementary information about his personal life not mentioned by Jefferson or Dread in the briefings. Laura Chen also makes her first public presentation in the comic. In the last pages, the comic also contains biographies of all the main characters: Max, Jefferson, Dread, Psycho, Rachel, and 'Berto. Most of the facts mentioned in the comic are present and developed in the TV series' first season, though some details differ slightly. The comic is supposed to be based on Mattel's Max Steel Sourcebook, and since creative control of the TV series changed three times due to the bankruptcies of the two initial animation studios, it is possible that a few of the hints mentioned in the comic never had a chance to be developed in the series, or were discarded in favor of further development of the characters. Among the most notable differences, the comic mentions that N-Tek's founder and original CEO was Jim McGrath, Max's father, instead of Marco Nathanson. John Dread does not wear glasses, as he always does in the series, and 'Berto is described as a traditional die-hard computer nerd with no experience of girls or real life at all (in the series he does have a life, and he is even a reasonably skilled fighter). The comic also insinuates that Psycho may be Max's biological father, and that the two may have a relationship similar to that of Luke Skywalker and Darth Vader from Star Wars. This plot point was never mentioned in the TV series or films, but the 2004 film Endangered Species includes a scene in which Psycho defeats Max and offers to join forces with him to rule mankind together, a scene extremely similar to the one between Luke and Vader in The Empire Strikes Back. In 2006, Mattel released a four-issue series of mini comics, each four pages long. Each comic described an encounter between Max and a specific version of Elementor (Earth, Fire, Water, and Air, respectively). The mini comics were used mostly as brochures to introduce Elementor to children.
In 2013, VIZ Media's all-ages imprint, Perfect Square, released The Parasites, the first in a series of full-length graphic novels featuring a revamped Max Steel universe that tied into the new television series. Edited by former Naruto manga editor Joel Enos, the first graphic novel was written by The Stuff of Legend co-creator Brian Smith and illustrated by Jan Wijngaard. The next two books in the series, Hero Overload and Haywire, were released in 2014 and featured stories by writers B. Clay Moore and Tom Pinchuk, as well as more artwork from Wijngaard and Voltron Force artist Alfa Robbi. TV series Netter Digital Entertainment produced a TV series of the same name, based on a 19-year-old college student named Josh McGrath, who has superpowers and can transform into the powerful Max Steel. The series starts with the episode "Strangers": Max and his partner Rachel Leeds are at a UN meeting when Rachel and all the attendees disappear — except Max, who was on the roof watching, something Rachel had scolded him for. Later, Dr. Roberto Martinez finds out that the floor inside rotated, sending everyone in the room into a hidden chamber under the floor while a new floor spun into place. The enemy is later revealed to be L'Etranger, who has taken the UN representatives hostage and is escaping with them on a train. Max and 'Berto follow Rachel's tracking signal, hidden in her earring, to the train. Max fights L'Etranger on top of the train, and in the middle of the confrontation his enemy is knocked off it. The first episode does not give much background on who Max Steel is or what he does, only that he has a double identity, works for a secret organization, and is super-powered. Not until the third episode, "Shadows", is it revealed, through a series of flashbacks, how Josh became Max Steel: Josh apparently fell asleep outside of N-Tek while visiting his father. He hears someone break down a back door into N-Tek, so he follows the man down an elevator. Both of them are caught by N-Tek security agents, but the intruder, who is revealed to be "Psycho", Max's future enemy, in human disguise, defeats the agents. Josh then follows Psycho into a room where he sees Psycho stealing N-Tek nanoprobes. Josh and Psycho battle for a moment, and in the middle of the fight Josh kicks Psycho in the face, revealing his metal skull-like face. Josh panics, and Psycho fires his laser at Josh but hits the glass holding the nanoprobes instead, causing the container to burst and sending the probes onto Josh's body. Jefferson finds Josh in an extremely weak condition, infected by the probes. 'Berto tells them that the probes are dying, causing Josh to die with them. Both need transphasic energy to survive, so they put Josh inside the transphasic regenerator (a machine capable of regenerating the nanoprobes). This procedure saves Josh's life, but also gives him powers boosted by the probes, now synchronized with his body. Josh presses Jefferson to let him work at N-Tek, but his stepfather refuses. Josh then tells him that "Josh McGrath is out of the picture", and transforms into Max Steel. The first season lasted 13 episodes. Netter Digital then went bankrupt, so Foundation Imaging took over season 2; for similar reasons, season 3 was developed by Mainframe Entertainment. The third season also took a different approach: N-Tek's counter-terrorism section is shut down due to the events of "Breakout", the second-season finale, so the main characters become extreme sports stars.
The plot arguably resembled that of the CGI Action Man series, which was Max Steel's main competition at the time of airing. Season 3 ended up being the final season of Max Steel, concluding with the episode "Truth Be Told." About two years after the series ended, a film titled Endangered Species was released direct-to-DVD, and a Max Steel film was released every year from 2004 until 2012. The films, however, took a different approach: while the series focused on chasing terrorists, secret agents, and super-enhanced humans, the films' plots focused on fighting superpowered mutants and monsters. The films also almost never mention Josh McGrath, giving the impression he was forgotten, which was probably a continuity mistake. A re-imagining of the first series, bearing the same title, premiered on Disney XD on March 25, 2013. In this re-imagining, Josh McGrath no longer exists; the protagonist is Maxwell "Max" McGrath. Max Steel is also no longer one person, but two: Maxwell and an Ultralink named "Steel". Maxwell, a tachyon-human hybrid, has the ability to generate Tachyon Unlimited Radiant Bio-Optimized (TURBO) energy; however, he cannot contain it, as it causes him to become unstable. Steel, the Ultralink, has the ability to merge with Maxwell, which helps Maxwell stabilize his TURBO energy; after the two combine forces, "Max Steel" is created. Max Steel was nominated at the British Academy Children's Awards in the category "BAFTA Kid's Vote – Television". Movies Max Steel films have usually been offered as bonus gifts with the purchase of other products, and were not otherwise available. In 2003 in the United States, selected Max Steel action figures came with a free Max Steel: Endangered Species DVD. Countdown was included free in the largest playsets of the toy line during the 2006 Christmas season, and, in Latin America only, as a free gift with Happy Meal purchases during November. In Mexico, Dark Rival was available inside an ActII popcorn special promotional pack at the end of 2007 and in early 2008. Bio Crisis premiered with no advertising at all, except for a brief press announcement in an interview at Mattel's regional headquarters; the film was immediately available as a bonus gift with the purchase of various products, but only at very specific locations, since at the same time Dark Rival and Forces of Nature were relaunched as part of the countdown to Max Steel's tenth-anniversary celebration. Several new characters not present in the original TV series were introduced in the films. Max Steel is Josh McGrath, an amateur extreme sports athlete and special agent of the N-Tek corporation; by becoming Max Steel and using his Turbo Mode, he is granted greater speed and strength. With advanced animation technology, music, and effects, the films show him participating in the Far Challenge the Americas, a sporting event that brings together the continent's top extreme athletes, with competitions on location in Argentina and Brazil. Over the course of the plot, Max takes control of and responsibility for his powers, and assumes a clear leadership role when he has to help his friends, who are attacked by a scorpion. Bioconstrictor and Psycho, his two worst enemies, join forces to defeat him in an adventure that takes them to Peru and Ecuador, to the ruins of the Inca civilization. Meanwhile, Max partners with a smart jaguar, who becomes his best friend and fights at his side against the villains.
Forces of Nature is the only film whose title was changed in Latin America, where it was released as "El dominio de los elementos" ("The Elements' Dominion"). All the other films kept their names, even translated into different languages. In this film, Jefferson Smith returns to Bio-Con's abandoned base. There he finds several of Bio-Con's animals in stasis, most of them failed experiments, with one exception: a creature codenamed Elementor wakes up and escapes from its container. Elementor seeks out five different Elementium isotopes, which Bio-Con originally used in the experiments that mutated him, with the intention of creating a creature far more ferocious than himself. Each isotope grants Elementor the power to control and mimic one specific element: Earth, Water, Wind, or Fire. Once in possession of these four elements, the power to control Metal and Ice is granted as an extra bonus. One by one, Elementor absorbs each isotope and gains new powers. Jefferson then puts Max under arrest without further explanation, but Max is able to break free when Elementor attacks N-Tek's headquarters looking for the last isotope. Max and Jeff find a way to escape, but before they can leave the area, Jefferson reveals to Max that years ago, when he was transformed into "Max Steel," the fifth isotope was placed inside his body to save his life, and that the arrest was just an effort to hide him from Elementor, since nobody knows for sure what would happen if the isotope were extracted from Max's body. After a brief confrontation with Elementor, 'Berto and Kat discover that the fifth isotope makes the others go haywire instead of adding new powers to its wearer, so Max decides to confront Elementor instead of running away. In the final battle, Max releases the power of the fifth isotope until its overcharge causes a reaction that destroys Elementor. In a later film, after a battle against Psycho's remaining androids, Max discovers that Elementor, destroyed over a year earlier, has survived as an unstable power form. Elementor invades N-Tek, takes over Jeff's body, and goes to the Transphasic Generator in an attempt to use it to reconstitute his physical body. 'Berto reverses the power and accidentally forces Elementor to divide into his different versions, meaning Max has to face six different beasts, each with a different power. Max is attacked and injured by the Elementors, and 'Berto uses his updated nanoprobes to save Max's life. It is then revealed that Kat is infected with Elementium and is dying. The Elementors, each with a mind of its own and control over only its respective element, work as a team and try to take over the planet. However, Max, using the new steel nanoprobes, tricks Elementor by telling him that if the world is going to be controlled by the monsters, he would rather destroy the Earth himself. All the Elementors attempt to kill Max, but instead fall into his trap: Max, 'Berto, and Kat (poisoned with Elementium) manage to reunite them all in a desert wasteland. 'Berto reconstructs the "Imploder," a black-hole device found in Psycho's base at the beginning of the film, now tuned to affect only Elementium. The process destroys Elementor and strips all the isotopes from his body, leaving him in his original Bio-Con duplicate state. It also takes the Elementium out of Kat's body, saving her. Unknown thefts of N-Tek property then put Max Steel on the trail of a new super enemy, Troy Winter, who claims to be superior to Max in every sense.
The chase is on when Team Steel realizes Troy's goal is to obtain a piece of a comet named Morphosos using the stolen N-Tek technology and deliver it into Warren Hunter's hands. During a battle with Max, Troy falls into a volcano with a piece of the comet. The chemical reaction between the extreme heat and the comet's components transforms Troy into a sharp, dark, crystal-like mineral creature with the power to "extract" other living beings' life force and abilities. Troy then adopts the name Extroyer and attacks N-Tek headquarters. In the middle of this confrontation, Elementor is once again released. Extremely weak, Elementor chases Extroyer, seeking the comet fragments as a new source of power, but he is "extracted" and defeated, while Extroyer takes over his ice form (which he used to confront and battle Max at Eclipse Towers) and becomes a glassy blue, crystal-like frozen elemental called X-Elementor; he is soon beaten, however, by Max using a gun loaded with Morphosos-attracting nano-cubes. Troy takes 'Berto, Kat, and Jefferson as hostages and forces Max to obey him. Extroyer uses N-Tek's stolen magnets, powered by Max, to redirect the comet Morphosos near Earth so he can take as many crystal fragments as he wants, but he realizes too late that it was all a setup, and he is sent into deep space instead, stuck to the comet's surface. In the following film, Bio Crisis, Max has to investigate a contaminated jungle and travels from outer space to the center of the earth in his quest to unveil the mystery. At the beginning of the story, it is mentioned that the last battle against Extroyer permanently crippled the Adrenalink system, forcing Max to go back to an updated version of Going Turbo! (a complete explanation of this new energy system appears in Turbo Missions Episode 12: "Relaunch"). In this film a new enemy, the nefarious Doctor Grigor Rendel, makes his first appearance. It is revealed that Iago has been working for him from the beginning, secretly stealing technology from Eclipse. Rendel has constructed an android named Cytro, whose prime directive is to help him in his plans to take control of the contaminated jungle and destroy Max in the process. Cytro's programming is accidentally scrambled, and for a couple of hours he thinks he must protect Max instead of fighting him. However, he is aware of the malfunction and constantly mentions how much time is left until he is "authorized" to kill Max again. Thanks to the information retrieved by Iago, Dr. Rendel locates Elementor immediately after the battle in Dark Rival and, taking advantage of his unconsciousness, takes him prisoner to perform new experiments to repower him. In an effort to synthesize Morphosos crystals, Dr. Rendel uses fragments recovered from Extroyer's body, partially contaminated with Troy Winter's DNA; the result is an Extroyer clone. Seizing his opportunity while Max is busy fighting the Fire Elementor, the clone absorbs the comet fragment into his body and becomes a giant monster. Despite currently being in "evil mode," Cytro makes one last supreme effort to stop him and reverses the effect of the crystals, causing an explosion that reduces both to smithereens. Rendel is arrested, and Max finds Cytro's memory core and has 'Berto rebuild him. After being reconstructed, Cytro becomes Max's mission partner, but both are now placed under the direct orders of Forge Ferrus, a new N-Tek field commander, instead of Jefferson.
This new boss is a control freak with an aggressive, all-for-the-team attitude that contrasts with Max's free spirit, causing several conflicts. In response to an emergency call, Max and Cytro are sent to a subterranean lab in Antarctica, which is actually a prison for an unstable N-Tek agent who suffered a severe mutation due to heavy exposure to chemical contamination. After fighting several "toxoids" (little mutant creatures born from chemical waste) and directly disobeying Ferrus's orders, Max gets into the prison level, thinking he can save the injured agent, only to discover it is all a scheme to free him. The agent is then revealed as Titus Octavius Xander, aka Toxzon, a mutant who consumes and manipulates toxic substances, sealed in nanotech armor similar to Max's nano-suit but more primitive and bulky. With his vast knowledge of N-Tek fighting techniques and his hazardous powers, Toxzon defeats Max and Cytro and escapes, intending to locate and destroy N-Tek headquarters and contaminate the world in retaliation for what he considers a long imprisonment and suffering; he refuses to accept that his incarceration was a desperate effort to save his life, since his mutated body cannot survive in a clean environment without the help of his containment armor. To combat Toxzon, Max undergoes a procedure that increases his body's Turbo Fuel capacity and receives a brand-new nano-suit with ten times more power, allowing him to battle Toxzon on equal footing. During the final battle, it is revealed that the machine which caused Toxzon's mutation is still working, now packed with radioactive material as originally intended. Toxzon reconfigures the device to make openings in the nano-pyramids so he can absorb the material, increasing his powers. In his encounter with Max, he overpowers Max and tries to make him fall into the device, but Max knocks off Toxzon's protective face mask and kicks him into the machine, trapping him in a nano-pyramid, which becomes his new prison. After Toxzon's capture, Max and Cytro are sent to space to detonate and destroy the Morphosos comet once and for all. In the middle of their mission, they find Troy Winter trapped inside the comet. Somehow, the comet's radiation has purified the Morphosos crystals within him, curing him of his Extroyer state and reverting him to a normal human being, though he still retains his power to extract the life force of others. His memory has also been wiped, so he remembers nothing of the entire Extroyer episode (as seen in Dark Rival) and holds no negative feelings toward Max Steel, even considering himself a long-time friend. On Earth, Max is initially trusting, but Forge distrusts Troy because of his experience with Toxzon and orders him kept under 24-hour watch. In the N-Tek prison, Toxzon realizes he can use some of his toxoids to re-contaminate Troy's body when Troy extracts their life force, allowing Toxzon to turn him back into Extroyer, now under his control. Toxzon also frees other N-Tek prison inmates to increase the chaos and leaves the place in the company of Elementor. The trio lands in a major US city, where Toxzon convinces Elementor to transform into a giant air mass, powers him up with a new isotope stolen by Extroyer, and then contaminates him to produce a sizeable poisonous cloud that will spread all over the world, erasing all life on the planet.
Max purifies Extroyer with his Turbo powers, reverting him into Troy, and convinces him to extract the storm's power from Toxzon, weakening him enough for Max to defeat him while Cytro captures the now-mindless Elementor. Troy goes off on his own, needing to learn to control his powers, and wishes Max farewell. However, a news reporter named Mike Nickelson is mutated by the radioactive fallout of the toxic cloud created during the battle between Max Steel and the Toxic Legion, which transforms him into a scrap metal monster. In the next film, Nickelson comes back with a vengeance. Blaming N-Tek for his current condition, Nickelson, who now calls himself Makino, tries to capitalize on the fame and notoriety Max has gained as a people's hero to turn public opinion against him. Makino uses his newfound power to control machinery to cause a satellite accident that burns a ghost town to ashes, but "leaks" to the media that N-Tek was responsible and releases a digitally altered version of the incident, which puts the group in the middle of a legal investigation into its covert operations. During the process, 'Berto is detained by the local authorities. Taking advantage of the situation, Makino kidnaps him and forces him to reveal the secret of the N-Tek nanotechnology that allows Max to hyper-compress weapons and spy equipment, adding it to his own arsenal and gaining the ability to absorb vehicles and partially reconstruct himself into them. Since Makino can partially transform himself into a battle machine, Cytro is upgraded with transforming abilities (similar to those of the Transformers), which allow him to change into a giant robot, and later a tank. After 'Berto's successful rescue, Makino publicly challenges Max to an ultimate fight to determine who is the real protector of the people: in the same stadium where 'Berto was held prisoner, both contenders will fight while being watched by the world, demonstrating their true motivations and reasons to fight for mankind. However, the challenge is a scheme to ruin N-Tek's and Max's reputations. Thanks to his expertise as a media reporter, Nickelson delays and edits the "live broadcast" so the audience sees him as a hero. Cytro leaves Max to fight alone, but teams up with 'Berto to disrupt the computer systems and connect the stadium's cameras to the internet and TV satellites around the world so that everybody can learn the truth behind Makino's plot. Max manages to defeat Makino and remove the hard drive and power core within his chest, leaving him powerless. With Makino's defeat and public confession, N-Tek's name is finally cleared and Makino is sent to prison. In the next film, Max's mission to retrieve a dangerous device in a violent storm is foiled by a mysterious agent. Meanwhile, two of his old rivals, Toxzon and Makino, are brewing up trouble of their own on a prison transport ship. Max is teamed up with Jet Ferrus, an N-Tek cadet with a secret agenda and a history of rebellious behavior, and Forge's daughter. While Elementor distracts Max with a mid-air attack, Toxzon and Makino start the long journey toward Toxzon's secret lair to work on the next phase of Toxzon's diabolical plan. Max, Jet, and Cytro dive into the depths to find Toxzon's secret base, encountering trap after trap as they chase the villain. Meanwhile, Toxzon explains his plan to have Makino take over N-Tek's brand-new aerial battle fortress, the Warden.
To enact his plan, Toxzon uses an archaic cyclotron to enhance Makino's power to control machines. Meanwhile, Max, Cytro, and Jet encounter both Toxzon and Elementor, who try their best to stop them from leaving the lair alive. With Makino empowered, Toxzon puts his plan into motion to take over the Warden and poison the world, while Max, Cytro, and Jet attempt to board the cruiser and stop him. After taking down Makino, Max and Jet face off against Toxzon in a final fight. Paramount Pictures planned to remake Max Steel as a motion picture. Originally, Taylor Lautner had been confirmed to star in the lead role as Josh McGrath. However, by March 2010, Lautner had dropped out of the film in favor of Hasbro and Universal's Stretch Armstrong. Due to the relaunch of Max Steel in 2013, all plans for a live-action film were suspended. On August 2, 2013, it was revealed that Dolphin Entertainment was working on a Max Steel film. Christopher Yost was announced as writer, while Stewart Hendler was confirmed as director. The film follows the plotline of the reboot rather than the original series. It was distributed by Open Road Films and was originally planned for release in 2014. On February 6, 2014, the studio cast Ben Winchell as Max Steel and Ana Villafane as his love interest, Sofia Martinez. On April 29, 2014, actor Andy Garcia was cast in the role of Dr. Miles Edwards, a brilliant and mysterious scientist. On May 20, 2014, actor Mike Doyle was cast in a role. The film was released in the United States by Open Road Films on October 14, 2016, and was a critical and commercial failure. Video games Max Steel's alter ego In the TV show's initial run, Josh McGrath was a white, blond 19-year-old. After the accident in which the N-Tek nanoprobes infected his body, he gained the ability to transform himself into a dark-haired athletic adult, older and stronger than Josh. This grown-up alter ego is Max Steel. Unlike most heroes with secret identities, Josh's was rarely an issue, except in his personal relationships, mostly with his girlfriend and fans from the extreme sports circuit. At different times, both Psycho and Dread witnessed Josh's transformation into Max from a remote location and showed no surprise about it. In at least one episode, Josh transforms in Psycho's presence. In another episode, Psycho chases Josh, forces him to separate from his friends, and compels him to transform into Max Steel. It is not clear how many of Max's enemies know about his secret identity. At the end of the first season, it is revealed that one of the most trusted directors of N-Tek, Jean Mariot, was really a DREAD mole, which explains how its members knew Max's true identity, but not why Max himself never seems to care about keeping his identity a secret. In the Season 2 episode "Old Friend, New Enemy," Max personally reveals his true identity to Bio-Con. In the third season, due to a combination of factors (a new creative design team, a change of animation company, and a weak US market), all the companies involved in the production of Max Steel gradually dropped the secret identity concept, but still had the characters going on "secret missions" with little to no help from N-Tek. The films, released after the TV show ended, took a different approach: officially, the Josh identity was dropped, and Max does not transform at all.
This decision wasn't difficult to make, since all Max Steel merchandising, licensed products, promos, and advertising campaigns had always portrayed him as a brunette, and no licensed products were ever produced featuring his secret identity. The "Josh" secret identity concept was present only in the TV show and the first film, Endangered Species, although Josh is also seen in a flashback scene in "Forces of Nature," when Jeff reflects on the creation of "Max Steel." By 2006, "Josh McGrath" was almost forgotten, and all references to him and his personal life were removed from the main story; even in everyday situations, the character was still referred to as Max. Josh was slowly phased out until he was simply eliminated from the story continuity, which was readjusted to remove any trace of him, and any mention of Josh was removed from action figure boxes and all other licensed merchandising. The 2005 film Forces of Nature shows Josh for the final time, in a flashback in which the origin of Max Steel is briefly explained for new audiences. The 2008 film Bio Crisis recreates the very same scene in which Psycho infects Max with N-Tek's nanoprobes, followed by a brief explanation of the process used to save his life. However, this time it is stated that Max Steel (not Josh) is the victim, and the character in the scene is indeed Max Steel, with his exact "Max" appearance; this was most likely a continuity mistake. In the 2013 TV series reboot, the main character is renamed Maxwell McGrath, "Max" for short to family and friends. When in "turbo mode" he is known as Max Steel, because he combines with an Ultralink named Steel. Logo color Over the years, Max Steel's logo has kept the same style while its color has varied. The logo used in the TV series featured the word "Max" in orange, with smaller burnt-orange letters inside the letterforms that also read "Max." Some toys, however, sported different colors: some had the word "Max" in red at the top, fading to yellow at the bottom, with the inner "Max" in black; others were more yellow, with less red, and had the inner "Max" in orange; still others simply matched the TV series logo. For the first film, the color of "Max" was changed to blue, and it stayed that way through "Forces of Nature." After the Adrenalink system was introduced, the color of "Max" was changed to green, with a lightning bolt through the word (replacing the original inner "Max" letters). The green logo was used until 2013, when Mattel decided to reboot the franchise, and the logo was redesigned: the "Max" is now silver (as opposed to the yellow/orange of the original), while the "Steel" is a glowing blue (as opposed to silver).
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-86] | [TOKENS: 11349] |
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions were met, life started on Earth by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread, by meteoroids for example, between habitable planets in a process called panspermia. During most of their stellar evolution, stars combine hydrogen nuclei into helium nuclei by fusion, and the comparatively lighter weight of the resulting helium allows the star to release the extra energy. The process continues until the star uses all of its available fuel, the speed of consumption being related to the size of the star. In its last stages, a star starts combining helium nuclei to form carbon nuclei. The larger stars can further combine carbon nuclei into oxygen and silicon, oxygen into neon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins the clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, such materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of a galaxy, the Milky Way, which in turn is part of the Local Group, a galaxy group that belongs to the Laniakea Supercluster. The universe is composed of all such structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause long travel times and communication delays: New Horizons took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would reach it in roughly 100,000 years (a quick arithmetic check of this figure appears below). With current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to comprise more combined matter than stars and gas clouds, but as it plays no role in the evolution of stars and planets, astrobiology usually does not take it into account. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone," wherein water may be at the right temperature to exist in liquid form on a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would freeze into ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even to actually have liquid water: Venus is located in the Solar System's habitable zone but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even when they orbit close to their stars as hot Jupiters, due to their crushing atmospheric pressures.
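The 100,000-year figure is straightforward to verify. A minimal back-of-the-envelope sketch in Python, assuming only the rounded values quoted above (50,000 km/h and 4.4 light-years), not precise mission data:

# Back-of-the-envelope check of the Voyager 2 travel-time figure.
# Inputs are the rounded values from the text, not precise mission data.
KM_PER_LIGHT_YEAR = 9.4607e12          # kilometers in one light-year

speed_kmh = 50_000                     # quoted exit speed of Voyager 2, km/h
distance_km = 4.4 * KM_PER_LIGHT_YEAR  # Alpha Centauri at 4.4 light-years

hours = distance_km / speed_kmh
years = hours / (24 * 365.25)          # convert hours to Julian years
print(f"~{years:,.0f} years")          # ~95,000, i.e. roughly 100,000 years

At such speeds the result is dominated by the distance: even a probe ten times faster would still need on the order of ten thousand years.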
The actual distances of habitable zones vary according to the type of star, and even the activity of each specific star influences local habitability. The type of star also defines how long the habitable zone will exist, as its presence and limits change along with the star's evolution. The Big Bang occurred 13.8 billion years ago, the Solar System formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe" (a worked check of this temperature window appears below). Life on Earth is ubiquitous across the planet and has adapted over time to almost all available environments; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements: a celestial body may not have any life on it, even if it were habitable. Likelihood of existence Life in the cosmos beyond Earth has never been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a similar habitability to Earth, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen elsewhere. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, meaning that the forces that would facilitate or prevent the existence of life are the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible only because of a series of factors, ranging from its location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. Proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result rather than a reasonable scientific explanation for any gathered data.
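The temperature window behind the "Habitable Epoch" can be cross-checked against the standard scaling of the cosmic background temperature with redshift. A worked sketch; the redshift values are our own arithmetic using the present-day background temperature T_0 ≈ 2.725 K, not figures from the text:

T(z) = T_0 (1 + z)  =>  1 + z = T / T_0
z at 373 K:  373 / 2.725 - 1 ≈ 136
z at 273 K:  273 / 2.725 - 1 ≈ 99

So the liquid-water window corresponds roughly to redshifts z ≈ 136 down to z ≈ 99, consistent with the quoted 10 to 17 million years after the Big Bang.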
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is N = R* · fp · ne · fl · fi · fc · L,: xix where N is the number of such civilizations, R* is the average rate of star formation in the galaxy, fp is the fraction of stars with planets, ne is the average number of potentially life-supporting planets per star that has planets, fl is the fraction of those planets that develop life, fi is the fraction of those that develop intelligent life, fc is the fraction of civilizations that release detectable signals into space, and L is the length of time over which such civilizations release those signals. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution: 10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000 (a short computational sketch of this product appears below). The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time it was devised. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets; in other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation of the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are stars, as for life on Earth, which depends on the energy of the Sun. However, there are alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems in deep areas of Earth's ocean that receive no sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
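As a quick illustration of how the estimate is assembled, here is a minimal sketch in Python; the factor values are the speculative ones quoted above, mapped onto the parameters in the order they appear in the equation, and carry no more authority than the original guesses:

# Sketch of the Drake equation with the speculative values quoted above.
# Every factor is uncertain and open to substitution.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicative civilizations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# 10,000 = 5 * 0.5 * 2 * 1 * 0.2 * 1 * 10,000
N = drake(R_star=5, f_p=0.5, n_e=2, f_l=1, f_i=0.2, f_c=1, L=10_000)
print(N)  # 10000.0

The structure, a chain of multiplied rates and fractions, is the whole point of the equation: any factor near zero collapses the estimate, which is why the unknowable social-science terms dominate the uncertainty.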
Life on Earth requires water in a liquid state, as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process could start within a gaseous or solid medium: the atomic speeds in such media, either too fast or too slow, make it difficult for specific atoms to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to Earth's. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane, or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared to carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection, a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those on Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes, and from unicellular to multicellular organisms, through evolution. So far no alternative process to achieve such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed: the Cambrian explosion took place billions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems, and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars could lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Conditions on the other planets of the Solar System, and presumably on many worlds beyond it, are very harsh and seem too extreme to harbor any life: intense UV radiation, extreme temperatures, lack of water, and other factors do not seem to favor the creation or maintenance of life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that seem, at first sight, unlikely to have harbored life. Fossil evidence, together with long-standing theories supported by years of research, marks environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that most life on Earth now inhabits; hydrothermal vents, for instance, are scorching hot where magma escaping from the Earth's mantle meets the much colder ocean water. Even today, diverse populations of bacteria inhabit the areas surrounding hydrothermal vents, which suggests that some form of life could be supported even in environments as harsh as those on other planets of the Solar System. What makes these harsh environments suitable for the origin of life, on Earth and possibly on other planets, is that the relevant chemical reactions occur spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively poorly oxygenated environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and these carbon-fixing reactions were therefore important for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets, in the Milky Way galaxy and beyond, those atmospheres are most likely reducing, or have very low oxygen levels, especially compared with Earth's atmosphere. If the necessary elements and ions were present on such planets, the same reduced, carbon-fixing chemistry occurring around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System contains a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth, and no intelligent species other than humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would already have been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and then developed in a different way: it has a runaway greenhouse effect, the hottest surface in the Solar System, sulfuric acid clouds, and a thick carbon-dioxide atmosphere with enormous pressure, and all its surface liquid water has been lost. Comparing the two planets helps in understanding the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting them. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. Europa's ocean, by contrast, would be in contact with the rocky seabed, which helps drive chemical reactions. It may be difficult, though, to dig deep enough to study such oceans. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not require digging, as it releases water into space in eruption columns. The space probe Cassini flew through one of these, but could not make a full study because NASA had not expected the phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry proceed at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons; however, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves, and the requirements for its continued existence. This helps to determine what to look for when searching for life on other celestial bodies. It is a complex area of study that combines the perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) had been reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. However, the lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they might have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed away from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms recording how each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of photosynthesizing plants. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Saturn's moon Enceladus, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable of developing a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that are not produced by natural causes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth, and the time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, something that could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth, and fossil fuels may well be generated and used on such worlds too. The abundance of chlorofluorocarbons in an atmosphere could also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet could be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets with the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation that telescopes could notice. Excess infrared radiation is typical of young stars, surrounded by the dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature: such elements would, in theory, be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets discovered so far range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter.
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included (a rough reconstruction of these tallies appears below). The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; according to most definitions of a planet, however, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment; on Earth, this replenishment occurs through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy while it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider the universe inherently understandable and to reject explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursors to it, such as the idea that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds: in Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
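The 11- and 40-billion tallies follow from simple multiplication once the stellar fractions are fixed. A rough sketch; the stellar fractions below are illustrative assumptions of ours (roughly a quarter Sun-like, the remainder treated as red dwarfs, with the "1 in 5" rate applied to both groups), not figures from the text:

# Rough reconstruction of the habitable-planet tallies quoted above.
# The stellar fractions are illustrative assumptions, not source data.
stars_milky_way = 200e9     # assumed total stars in the Milky Way
sunlike_fraction = 0.25     # assumed share of Sun-like stars
red_dwarf_fraction = 0.75   # assumed share of red dwarfs
hz_rate = 1 / 5             # "about 1 in 5" stars with an Earth-sized HZ planet

sunlike_hz = stars_milky_way * sunlike_fraction * hz_rate
total_hz = sunlike_hz + stars_milky_way * red_dwarf_fraction * hz_rate
print(f"{sunlike_hz:.0e}")  # 1e+10: the order of the 11 billion quoted
print(f"{total_hz:.0e}")    # 4e+10: matching the 40 billion with red dwarfs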
Eventually two groups emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants must have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center of the universe, it was also the only planet in it. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism. Multiple "worlds" that support human life are mentioned in Jain scriptures, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from the current knowledge of the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors spoke of other worlds, they meant places located at the center of their own systems, each with its own stellar vault and cosmos surrounding it.

The Greek ideas and the disputes between atomists and Aristotelians outlived the decline of the Greek world. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and that knowledge spread through the Byzantine Empire, from where it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way; there were stricter and more permissive views within the Church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere.

By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than around Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This insight benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, dispelled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.

The heliocentric model was further strengthened by Sir Isaac Newton's theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There had been very little actual discussion of extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life might exist on them as well soon became an ongoing topic of discussion, although one with no practical means of investigation.

The possibility of extraterrestrials remained a subject of widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System was populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System nevertheless remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked forever the idea of the existence of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).

The science fiction genre, although not yet so named, developed during the late 19th century. The expansion of the theme of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, but later, more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail showing that there was nothing special about the site.

The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is studied by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of the era failed to understand it. Most UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth.

By the 21st century, it was accepted that multicellular life in the Solar System exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability makes it possible to assess in scientific terms the likelihood of finding life at each specific celestial body, as it is known which features are beneficial or harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.

Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. Other scientists, on the other hand, are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, which claims that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the event of an extraterrestrial contact. One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office; part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep-space research, and acknowledged the possibility that primitive life exists on other planets of the Solar System. The French space agency has an office for the study of unidentified aerospace phenomena, which maintains a publicly accessible database of such phenomena with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for about 25% of them, an extraterrestrial origin can be neither confirmed nor ruled out.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large", but he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, extraterrestrials were at first not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the new notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), which has since become more effective and less expensive. Real-life events sometimes capture people's imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses – a description that eventually became the grey alien archetype widely used in works of fiction. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mithraic_mysteries] | [TOKENS: 12586] |
Mithraism

Mithraism, also known as the Mithraic mysteries or the Cult of Mithras, was a Roman mystery religion focused on the god Mithras. Although inspired by Iranian worship of the Zoroastrian divinity (yazata) Mithra, the Roman Mithras was linked to a new and distinctive imagery, and the degree of continuity between Persian and Greco-Roman practice remains debatable.[a] The mysteries were popular among the Imperial Roman army from the 1st to the 4th century AD. Worshippers of Mithras had a complex system of seven grades of initiation and communal ritual meals. Initiates called themselves syndexioi, those "united by the handshake".[b] They met in dedicated mithraea (singular mithraeum), underground temples that survive in large numbers. The cult appears to have had its centre in Rome, and was popular throughout the western half of the empire, as far south as Roman Africa and Numidia, as far east as Roman Dacia, as far north as Roman Britain,(pp 26–27) and to a lesser extent in Roman Syria in the east. Mithraism is viewed as a rival of early Christianity.(p 147) In the 4th century, Mithraists faced persecution from Christians, and the religion was subsequently suppressed and eliminated in the Roman Empire by the end of the century.

Numerous archaeological finds, including meeting places, monuments, and artifacts, have contributed to modern knowledge about Mithraism throughout the Roman Empire.[c] The iconic scenes of Mithras show him being born from a rock, slaughtering a bull, and sharing a banquet with the god Sol (the Sun). About 420 sites have yielded materials related to the cult. Among the items found are about 1000 inscriptions, 700 examples of the bull-killing scene (tauroctony), and about 400 other monuments.(p xxi) It has been estimated that there would have been at least 680 mithraea in the city of Rome. No written narratives or theology from the religion survive; limited information can be derived from the inscriptions and from brief or passing references in Greek and Latin literature. Interpretation of the physical evidence remains problematic and contested.[d]

Name

The term "Mithraism" is a modern convention. Writers of the Roman era referred to it by phrases such as "Mithraic mysteries", "mysteries of Mithras" or "mysteries of the Persians".[e] Modern sources sometimes refer to the Roman religion as Roman Mithraism or Western Mithraism to distinguish it from Persian worship of Mithra.[f]

Etymology

The name Mithras (Latin, equivalent to Greek Μίθρας) is a form of Mithra, the name of an old, pre-Zoroastrian and, later on, Zoroastrian god[g][h] – a relationship understood by Mithraic scholars since the days of Franz Cumont.[i] An early example of the Greek form of the name is in a 4th-century BCE work by Xenophon, the Cyropaedia, a biography of the Persian king Cyrus the Great. The exact form of a Latin or classical Greek word varies due to the grammatical process of inflection. There is archaeological evidence that in Latin worshippers wrote the nominative form of the god's name as "Mithras". Porphyry's Greek text De Abstinentia (Περὶ ἀποχῆς ἐμψύχων) has a reference to the now-lost histories of the Mithraic mysteries by Euboulus and Pallas, the wording of which suggests that these authors treated the name "Mithra" as an indeclinable foreign word.[j] Related deity-names in other languages include the Sanskrit mitra, an uncommon name of the sun god, who is mostly known as "Surya" or "Aditya".
Iranian Mithra and Sanskrit Mitra are believed to come from the Indo-Iranian word mitrás, meaning "contract, agreement, covenant". Modern historians have differing conceptions of whether these names refer to the same god or not. John R. Hinnells has written of Mitra / Mithra / Mithras as a single deity worshipped in several different religions. On the other hand, David Ulansey considers the bull-slaying Mithras to be a new god who began to be worshipped in the 1st century BCE, and to whom an old name was applied.[m] Mary Boyce, a researcher of ancient Iranian religions, writes that even though Roman Mithraism seems to have had less Iranian content than ancient Romans or modern historians used to think, nonetheless "as the name Mithras alone shows, this content was of some importance".[n]

Iconography

Much about the cult of Mithras is known only from reliefs and sculptures, and there have been many attempts to interpret this material. Mithras-worship in the Roman Empire was characterized by images of the god slaughtering a bull. Other images of Mithras are found in the Roman temples, for instance Mithras banqueting with Sol, and depictions of the birth of Mithras from a rock. But the image of bull-slaying (tauroctony) is always in the central niche.(p 6) Textual sources for a reconstruction of the theology behind this iconography are very rare.[o] (See section Interpretations of the bull-slaying scene below.) The practice of depicting the god slaying a bull seems to be specific to Roman Mithraism. According to David Ulansey, this is "perhaps the most important example" of evident difference between Iranian and Roman traditions: "... there is no evidence that the Iranian god Mithra ever had anything to do with killing a bull."(p 8)

In every mithraeum the centerpiece was a representation of Mithras killing a sacred bull, an act called the tauroctony.[p][q] The image may be a relief or free-standing, and side details may be present or omitted. The centre-piece is Mithras clothed in Anatolian costume and wearing a Phrygian cap, kneeling on the exhausted bull, holding it by the nostrils(p 77) with his left hand and stabbing it with his right. As he does so, he looks over his shoulder towards the figure of Sol. A dog and a snake reach up towards the blood. A scorpion seizes the bull's genitals. A raven flies around or sits on the bull. One or three ears of wheat are seen coming out from the bull's tail, sometimes from the wound. The bull was often white. The god sits on the bull in an unnatural way, with his right leg constraining the bull's hoof and his left leg bent and resting on the bull's back or flank.[r] The two torch-bearers on either side are dressed like Mithras: Cautes with his torch pointing up and Cautopates with his torch pointing down.(p 98–99) Sometimes Cautes and Cautopates carry shepherds' crooks instead of torches. The event takes place in a cavern, into which Mithras has carried the bull after having hunted it, ridden it and overwhelmed its strength.(p 74) Sometimes the cavern is surrounded by a circle, on which the twelve signs of the zodiac appear. Outside the cavern, top left, is Sol the sun, with his flaming crown, often driving a quadriga. A ray of light often reaches down to touch Mithras. At the top right is Luna, with her crescent moon, who may be depicted driving a biga.
In some depictions, the central tauroctony is framed by a series of subsidiary scenes to the left, top and right, illustrating events in the Mithras narrative: Mithras being born from the rock, the water miracle, the hunting and riding of the bull, meeting Sol who kneels to him, shaking hands with Sol and sharing a meal of bull-parts with him, and ascending to the heavens in a chariot. In some instances, as is the case in the stucco icon at Santa Prisca Mithraeum in Rome, the god is shown heroically nude.[s] Some of these reliefs were constructed so that they could be turned on an axis, with another, more elaborate feasting scene on the reverse. This indicates that the bull-killing scene was used in the first part of the celebration; then the relief was turned, and the second scene was used in the second part. Besides the main cult icon, a number of mithraea had several secondary tauroctonies, and some small portable versions, probably meant for private devotion, have also been found.

The second most important scene after the tauroctony in Mithraic art is the so-called banquet scene,(pp 286–287) which features Mithras and Sol Invictus banqueting on the hide of the slaughtered bull.(pp 286–287) In the banquet scene on the Fiano Romano relief, one of the torchbearers points a caduceus towards the base of an altar, where flames appear to spring up. Robert Turcan has argued that since the caduceus is an attribute of Mercury, and in mythology Mercury is depicted as a psychopomp, the eliciting of flames in this scene refers to the dispatch of human souls and expresses the Mithraic doctrine on this matter. Turcan also connects this event to the tauroctony: the blood of the slain bull has soaked the ground at the base of the altar, and from the blood the souls are elicited in flames by the caduceus.

Mithras is depicted as being born from a rock. He is often shown emerging from the rock, already in his youth, with a dagger in one hand and a torch in the other. He is nude, standing with his legs together, and wearing a Phrygian cap. In some variations he is shown coming out of the rock as a child; in one instance he holds a globe in one hand, and sometimes a thunderbolt is seen. There are also depictions in which flames shoot from the rock and from Mithras' cap. One statue had its base perforated so that it could serve as a fountain, and the base of another bears the mask of a water god. Sometimes Mithras has other weapons such as bows and arrows, and animals such as dogs, serpents, dolphins, eagles, other birds, lions, crocodiles, lobsters and snails may surround him. On some reliefs there is a bearded figure identified as the water god Oceanus, and on some there are the gods of the four winds. In these reliefs the four elements could be invoked together. Sometimes Victoria, Luna, Sol, and Saturn also seem to play a role; Saturn in particular is often seen handing over to Mithras the dagger or short sword used later in the tauroctony. In some depictions Cautes and Cautopates are also present, sometimes depicted as shepherds. On some occasions an amphora is seen, and a few instances show variations like an egg birth or a tree birth. Some interpretations suggest that the birth of Mithras was celebrated by lighting torches or candles.
One of the most characteristic and poorly understood features of the Mysteries is the naked lion-headed figure often found in Mithraic temples, named by modern scholars with descriptive terms such as leontocephaline (lion-headed) or leontocephalus (lion-head). His body is a naked man's, entwined by a serpent (or two serpents, like a caduceus), with the snake's head often resting on the lion's head. The lion's mouth is often open. He is usually represented as having four wings, two keys (sometimes a single key), and a sceptre in his hand. Sometimes the figure stands on a globe inscribed with a diagonal cross. On the figure from the Ostia Antica Mithraeum (CIMRM 312), the four wings carry the symbols of the four seasons, and a thunderbolt is engraved on his chest. At the base of the statue are the hammer and tongs of Vulcan and Mercury's cock and wand (caduceus). A rare variation of the same figure is also found with a human head and a lion's head emerging from its chest. Although animal-headed figures are prevalent in contemporary Egyptian and Gnostic mythological representations, no exact parallel to the Mithraic leontocephaline figure has been found. Based on dedicatory inscriptions for altars, the name of the figure is conjectured to be Arimanius, a Latinized form of the name Ahriman[t] – perplexingly, a demonic figure in the Zoroastrian pantheon. Arimanius is known from inscriptions to have been a god in the Mithraic cult, as seen, for example, in images from the Corpus Inscriptionum et Monumentorum Religionis Mithriacae (CIMRM) such as CIMRM 222 from Ostia, CIMRM 369 from Rome, and CIMRM 1773 and 1775 from Pannonia. Some scholars identify the lion-man as Aion, or Zurvan, or Cronus, or Chronos, while others assert that it is a version of the Zoroastrian Ahriman or the more benign Vedic Aryaman.[u] Although the exact identity of the lion-headed figure is debated by scholars, it is largely agreed that the god is associated with time and seasonal change.(p 94)

Rituals and worship

According to M.J. Vermaseren and C.C. van Essen, the Mithraic New Year and the birthday of Mithras fell on 25 December.[v][w] Beck disagreed strongly.(p 299, note 12) Clauss states: "The Mithraic Mysteries had no public ceremonies of its own. The festival of Natalis Invicti, held on 25 December, was a general festival of the Sun, and by no means specific to the Mysteries of Mithras." Mithraic initiates were required to swear an oath of secrecy and dedication. Mithras was thought to be a "warrior hero" similar to Greek heroes. Apparently, some grade rituals involved the recital of a catechism, wherein the initiate was asked a series of questions pertaining to the initiation symbolism and had to reply with specific answers. An example of such a catechism, apparently pertaining to the Leo grade, was discovered in a fragmentary Egyptian papyrus (Papyrus Berolinensis 21196). Almost no Mithraic scripture or first-hand account of its rituals survives,[o] with the exception of the aforementioned oath and catechism and the document known as the Mithras Liturgy, from 4th-century Egypt, whose status as a Mithraist text has been questioned by scholars including Franz Cumont.[x] The walls of mithraea were commonly whitewashed, and where this survives, it tends to carry extensive repositories of graffiti; these, together with inscriptions on Mithraic monuments, form the main source for Mithraic texts.
The archaeology of numerous mithraea indicates that most rituals were associated with feasting, as eating utensils and food residues are often found. These tend to include both animal bones and very large quantities of fruit residues.(p 115) The presence of large numbers of cherry-stones in particular would tend to confirm mid-summer (late June, early July) as a season especially associated with Mithraic festivities. The Virunum album, in the form of an inscribed bronze plaque, records a Mithraic festival of commemoration as taking place on 26 June 184. Beck argues that religious celebrations on this date are indicative of special significance being given to the summer solstice; but this time of the year coincides with ancient recognition of the solar maximum at midsummer, when iconographically similar holidays such as Fors Fortuna (ancient Rome), Saint John's Eve, and Jāņi (Latvia) are also observed. For their feasts, Mithraic initiates reclined on stone benches arranged along the longer sides of the mithraeum – typically there might be room for 15 to 30 diners, but very rarely many more than 40 men.(p 43) Counterpart dining rooms, or triclinia, were to be found above ground in the precincts of almost any temple or religious sanctuary in the Roman empire, and such rooms were commonly used for their regular feasts by Roman 'clubs', or collegia. Mithraic feasts probably performed a function for Mithraists very similar to that of the collegia for those entitled to join them; indeed, since qualification for Roman collegia tended to be restricted to particular families, localities or traditional trades, Mithraism may have functioned in part as providing clubs for the unclubbed. The size of the mithraeum is not necessarily an indication of the size of the congregation.(pp 12, 36)

Each mithraeum had several altars at the further end, underneath the representation of the tauroctony, and also commonly contained considerable numbers of subsidiary altars, both in the main mithraeum chamber and in the ante-chamber or narthex.(p 49) These altars, which are of the standard Roman pattern, each carry a named dedicatory inscription from a particular initiate, who dedicated the altar to Mithras "in fulfillment of his vow", in gratitude for favours received. Burned residues of animal entrails are commonly found on the main altars, indicating regular sacrificial use. However, mithraea do not commonly appear to have been provided with facilities for the ritual slaughter of sacrificial animals (a highly specialised function in Roman religion), and it may be presumed that a mithraeum would have made arrangements for this service to be provided in co-operation with the professional victimarius(p 568) of the civic cult. Prayers were addressed to the Sun three times a day, and Sunday was especially sacred. It is doubtful whether Mithraism had a monolithic and internally consistent doctrine;[y] it may have varied from location to location,(p 16) though the iconography is relatively coherent. It had no predominant sanctuary or cultic centre; and, although each mithraeum had its own officers and functionaries, there was no central supervisory authority. In some mithraea, such as that at Dura Europos, wall paintings depict prophets carrying scrolls, but no named Mithraic sages are known, nor does any reference give the title of any Mithraic scripture or teaching.
It is known that initiates could transfer with their grades from one mithraeum to another.(p 139)

Temples of Mithras are sunk below ground, windowless, and very distinctive. In cities, the basement of an apartment block might be converted; elsewhere they might be excavated and vaulted over, or converted from a natural cave. Mithraic temples are common in the empire, although unevenly distributed, with considerable numbers found in Rome, Ostia, Numidia, Dalmatia, Britain and along the Rhine/Danube frontier, while being somewhat less common in Greece, Egypt, and Syria.(pp 26–27) According to Walter Burkert, the secret character of Mithraic rituals meant that Mithraism could only be practiced within a mithraeum. However, some new finds at Tienen show evidence of large-scale feasting and suggest that the mystery religion may not have been as secretive as was generally believed.[z] For the most part, mithraea tend to be small, externally undistinguished, and cheaply constructed, the cult generally preferring to create a new centre rather than expand an existing one. The mithraeum represented the cave to which Mithras carried and then killed the bull; where stone vaulting could not be afforded, the effect would be imitated with lath and plaster. Mithraea are commonly located close to springs or streams; fresh water appears to have been required for some Mithraic rituals, and a basin is often incorporated into the structure.(p 73) There is usually a narthex or ante-chamber at the entrance, and often other ancillary rooms for storage and the preparation of food. The extant mithraea present us with actual physical remains of the architectural structures of the sacred spaces of the Mithraic cult. Mithraeum is a modern coinage; Mithraists referred to their sacred structures as speleum or antrum (cave), crypta (underground hallway or corridor), fanum (sacred or holy place), or even templum (a temple or a sacred space).[aa] In their basic form, mithraea were entirely different from the temples and shrines of other cults. In the standard pattern of Roman religious precincts, the temple building functioned as a house for the god, who was intended to be able to view, through the opened doors and columnar portico, sacrificial worship being offered on an altar set in an open courtyard – potentially accessible not only to initiates of the cult, but also to colitores or non-initiated worshippers.(p 493) Mithraea were the antithesis of this.(p 355)

The Suda, under the entry Mithras, states that "No one was permitted to be initiated into them (the mysteries of Mithras), until he should show himself holy and steadfast by undergoing several graduated tests." Gregory Nazianzen refers to the "tests in the mysteries of Mithras". There were seven grades of initiation into Mithraism, which are listed by St. Jerome. Manfred Clauss states that the number of grades, seven, must be connected to the planets. A mosaic in the Mithraeum of Felicissimus, Ostia Antica depicts these grades, with symbolic emblems that are connected either to the grades or are symbols of the planets. The grades also have an inscription beside them commending each grade into the protection of the different planetary gods.: 132–133 In ascending order of importance, the initiatory grades were Corax (raven), Nymphus (bridegroom), Miles (soldier), Leo (lion), Perses (Persian), Heliodromus (sun-runner), and Pater (father).(p 133–138) Elsewhere, as at Dura-Europos, Mithraic graffiti survive giving membership lists, in which initiates of a mithraeum are named with their Mithraic grades.
At Virunum, the membership list or album sacratorum was maintained as an inscribed plaque, updated year by year as new members were initiated. By cross-referencing these lists it is possible to track some initiates from one mithraeum to another, and also, speculatively, to identify Mithraic initiates with persons on other contemporary lists such as military service rolls and lists of devotees of non-Mithraic religious sanctuaries. Names of initiates are also found in the dedication inscriptions of altars and other cult objects. Clauss noted in 1990 that overall, only about 14% of Mithraic names inscribed before 250 CE identify the initiate's grade – and hence questioned the traditional view that all initiates belonged to one of the seven grades. Clauss argues that the grades represented a distinct class of priests, sacerdotes. Gordon maintains the former theory of Merkelbach and others, especially noting such examples as Dura, where all names are associated with a Mithraic grade. Some scholars maintain that practice may have differed over time, or from one mithraeum to another. The highest grade, pater, is by far the most common one found on dedications and inscriptions – and it would appear not to have been unusual for a mithraeum to have several men with this grade. The form pater patrum (father of fathers) is often found, which appears to indicate the pater with primary status. There are several examples of persons, commonly those of higher social status, joining a mithraeum with the status pater – especially in Rome during the 'pagan revival' of the 4th century. It has been suggested that some mithraea may have awarded honorary pater status to sympathetic dignitaries.

The initiate into each grade appears to have been required to undertake a specific ordeal or test,(p 103) involving exposure to heat, cold or threatened peril. An 'ordeal pit', dating to the early 3rd century, has been identified in the mithraeum at Carrawburgh. Accounts of the cruelty of the emperor Commodus describe him amusing himself by enacting Mithraic initiation ordeals in homicidal form. By the later 3rd century, the enacted trials appear to have abated in rigor, as 'ordeal pits' were floored over. Admission into the community was completed with a handshake with the pater, just as Mithras and Sol shook hands. The initiates were thus referred to as syndexioi (those united by the handshake). The term is used in an inscription by Proficentius[b] and derided by Firmicus Maternus in De errore profanarum religionum, a 4th-century Christian work attacking paganism. In ancient Iran, taking the right hand was the traditional way of concluding a treaty or signifying some solemn understanding between two parties. The activities of the most prominent deities in Mithraic scenes, Sol and Mithras, were imitated in rituals by the two most senior officers in the cult's hierarchy, the Pater and the Heliodromus.(p 288–289) The initiates held a sacramental banquet, replicating the feast of Mithras and Sol.(p 288–289)

Reliefs on a cup found in Mainz appear to depict a Mithraic initiation. On the cup, the initiate is depicted as being led into a location where a Pater would be seated in the guise of Mithras with a drawn bow. Accompanying the initiate is a mystagogue, who explains the symbolism and theology to the initiate. The rite is thought to re-enact what has come to be called the 'Water Miracle', in which Mithras fires a bolt into a rock, and water now spouts from the rock.
Roger Beck has hypothesized a third processional Mithraic ritual, based on the Mainz cup and Porphyry. This scene, called the 'Procession of the Sun-Runner', shows the Heliodromus escorted by two figures representing Cautes and Cautopates (see below) and preceded by an initiate of the grade Miles, leading a ritual enactment of the solar journey around the mithraeum, which was intended to represent the cosmos. Consequently, it has been argued that most Mithraic rituals involved a re-enactment by the initiates of episodes in the Mithras narrative,(pp 62–101) a narrative whose main elements were: birth from the rock, striking water from stone with an arrow shot, the killing of the bull, Sol's submission to Mithras, Mithras and Sol feasting on the bull, and the ascent of Mithras to heaven in a chariot. A noticeable feature of this narrative (and of its regular depiction in surviving sets of relief carvings) is the absence of female personages, the exceptions being Luna watching the tauroctony in the upper corner opposite Helios and the presumable presence of Venus as patroness of the Nymphus grade.(p 33) Only male names appear in surviving inscribed membership lists, and historians including Richard Gordon have concluded that the cult was for men only.[ab][ac] The ancient scholar Porphyry refers to female initiates in Mithraic rites,[ad] but the early 20th-century historian A.S. Geden wrote that this may be due to a misunderstanding. According to Geden, while the participation of women in the ritual was not unknown in the Eastern cults, the predominant military influence in Mithraism makes it unlikely in this instance. It has recently been suggested by David Jonathan that "Women were involved with Mithraic groups in at least some locations of the empire."(p 121)

Soldiers were strongly represented amongst Mithraists, as were merchants, customs officials and minor bureaucrats. Few, if any, initiates came from leading aristocratic or senatorial families until the 'pagan revival' of the mid-4th century; but there were always considerable numbers of freedmen and slaves.(p 39) Clauss suggests that a statement by Porphyry, that people initiated into the Lion grade must keep their hands pure from everything that brings pain and harm and is impure, means that moral demands were made upon members of congregations.[ae] A passage in the Caesares of Julian the Apostate refers to "commandments of Mithras".[af] Tertullian, in his treatise "On the Military Crown", records that Mithraists in the army were officially excused from wearing celebratory coronets on the basis of the Mithraic initiation ritual that included refusing a proffered crown, because "their only crown was Mithras".

History and development

According to the archaeologist Maarten Vermaseren, evidence from Commagene from the 1st century BCE demonstrates the "reverence paid to Mithras" but does not refer to "the mysteries".[ag] In the colossal statuary erected by King Antiochus I (69–34 BCE) at Mount Nemrut, Mithras is shown beardless, wearing a Phrygian cap (or the similar headdress, a Persian tiara) and Iranian (Parthian) clothing, and was originally seated on a throne alongside other deities and the king himself. On the back of the thrones there is an inscription in Greek, which includes the compound name Apollo-Mithras-Helios in the genitive case (Ἀπόλλωνος Μίθρου Ἡλίου). Vermaseren also reports on a Mithras cult in the Fayum in the 3rd century BCE.(p 467) R.D. Barnett has argued that the royal seal of King Saussatar of the Mitanni from c.
1450 BCE depicts a tauroctonous Mithras.[ah] The origins and spread of the Mysteries have been intensely debated among scholars, and there are radically differing views on these issues. According to Clauss, the mysteries of Mithras were not practiced until the 1st century CE. According to Ulansey, the earliest evidence for the Mithraic mysteries places their appearance in the middle of the 1st century BCE: the historian Plutarch says that in 67 BCE the pirates of Cilicia (a province on the southeastern coast of Asia Minor that provided sea access to adjacent Commagene) were practicing "secret rites" of Mithras.[ai] According to C.M. Daniels, whether any of this relates to the origins of the mysteries is unclear.[aj] The unique underground temples, or mithraea, appear suddenly in the archaeology in the last quarter of the 1st century CE.(p 118)

Inscriptions and monuments related to the Mithraic Mysteries are catalogued in a two-volume work by Maarten J. Vermaseren, the Corpus Inscriptionum et Monumentorum Religionis Mithriacae (or CIMRM). The earliest monument showing Mithras slaying the bull is thought to be CIMRM 593, found in Rome. There is no date, but the inscription tells us that it was dedicated by a certain Alcimus, steward of T. Claudius Livianus. Vermaseren and Gordon believe that this is the Livianus who was commander of the Praetorian Guard in 101 CE, which would give an earliest date of 98–99 CE. Five small terracotta plaques of a figure holding a knife over a bull have been excavated near Kerch in the Crimea, dated by Beskow and Clauss to the second half of the 1st century BCE,[ak] and by Beck to 50 BCE – 50 CE. These may be the earliest tauroctonies, if they are accepted to be a depiction of Mithras.[al] The bull-slaying figure wears a Phrygian cap, but is described by Beck and Beskow as otherwise unlike standard depictions of the tauroctony. Another reason for not connecting these artifacts with the Mithraic Mysteries is that the first of these plaques was found in a woman's tomb.[am]

An altar or block from near SS. Pietro e Marcellino on the Esquiline in Rome was inscribed with a bilingual inscription by an Imperial freedman named T. Flavius Hyginus, probably between 80 and 100 CE. It is dedicated to Sol Invictus Mithras.[an] CIMRM 2268 is a broken base or altar from Novae/Steklen in Moesia Inferior, dated 100 CE, showing Cautes and Cautopates. Other early archaeology includes the Greek inscription from Venosia by Sagaris actor, probably from 100–150 CE; the Sidon cippus dedicated by Theodotus, priest of Mithras, to Asclepius, 140–141 CE; and the earliest military inscription, by C. Sacidius Barbarus, centurion of XV Apollinaris, from the bank of the Danube at Carnuntum, probably before 114 CE.(p 150) According to C.M. Daniels, the Carnuntum inscription is the earliest Mithraic dedication from the Danube region, which, along with Italy, is one of the two regions where Mithraism first struck root.[ao] The earliest datable mithraeum outside Rome dates from 148 CE.[ap] The mithraeum at Caesarea Maritima is the only one in Palestine, and its date is inferred.[aq] Excavations at Inveresk, Scotland, in 2010 found two well-preserved altars to Mithras dated to 140 CE; the altars are believed to be from the Roman Empire's most northerly temple to Mithras. According to Roger Beck, the attested locations of the Roman cult in the earliest phase (c.
80–120 CE) fall into two groups: mithraea datable from pottery, and datable dedications.(pp 34–35)

According to Boyce, the earliest literary references to the mysteries are by the Latin poet Statius, about 80 CE, and Plutarch (c. 100 CE).[ar] The Thebaid (c. 80 CE),(p 29) an epic poem by Statius, pictures Mithras in a cave, wrestling with something that has horns. The context is a prayer to the god Phoebus. The cave is described as persei, which in this context is usually translated as Persian. According to the translator J.H. Mozley it literally means Persean, referring to Perses, the son of Perseus and Andromeda,(p 29) this Perses being the ancestor of the Persians according to Greek legend.(pp 27–29) Writing in approximately 145 CE, the early Christian apologist Justin Martyr charges the cult of Mithras with imitating the Christian communion.

The Greek biographer Plutarch (46–127 CE) says that "secret mysteries ... of Mithras" were practiced by the pirates of Cilicia, the coastal province in the southeast of Anatolia, who were active in the 1st century BCE: "They likewise offered strange sacrifices; those of Olympus I mean; and they celebrated certain secret mysteries, among which those of Mithras continue to this day, being originally instituted by them." He mentions that the pirates were especially active during the Mithridatic wars (between the Roman Republic and King Mithridates VI of Pontus), in which they supported the king. The association between Mithridates and the pirates is also mentioned by the ancient historian Appian. The 4th-century commentary on Vergil by Servius says that Pompey settled some of these pirates in Calabria in southern Italy. The historian Dio Cassius (2nd to 3rd century CE) tells how the name of Mithras was spoken during the state visit to Rome of Tiridates I of Armenia, during the reign of Nero. (Tiridates was the son of Vonones II of Parthia, and his coronation by Nero in 66 CE confirmed the end of a war between Parthia and Rome.) Dio Cassius writes that Tiridates, as he was about to receive his crown, told the Roman emperor that he revered him "as Mithras". Roger Beck thinks it possible that this episode contributed to the emergence of Mithraism as a popular religion in Rome.[as]

The philosopher Porphyry (3rd–4th century CE) gives an account of the origins of the Mysteries in his work De antro nympharum (The Cave of the Nymphs). Citing Eubulus as his source, Porphyry writes that the original temple of Mithras was a natural cave, containing fountains, which Zoroaster found in the mountains of Persia. To Zoroaster, this cave was an image of the whole world, so he consecrated it to Mithras, the creator of the world. Later in the same work, Porphyry links Mithras and the bull with planets and star-signs: Mithras himself is associated with the sign of Aries and the planet Mars, while the bull is associated with Venus.[at] Porphyry was writing close to the demise of the cult, and Robert Turcan has challenged the idea that Porphyry's statements about Mithraism are accurate. His case is that, far from representing what Mithraists believed, they are merely representations by the Neoplatonists of what it suited them in the late 4th century to read into the mysteries. Merkelbach and Beck, however, believe Porphyry's work "is in fact thoroughly coloured with the doctrines of the Mysteries".(p 308 note 37) Beck holds that classical scholars have neglected Porphyry's evidence and have taken an unnecessarily skeptical view of it.
According to Beck, Porphyry's De antro is the only clear text from antiquity which tells us about the intent of the Mithraic mysteries and how that intent was realized.[au] David Ulansey finds it important that Porphyry "confirms ... that astral conceptions played an important role in Mithraism."(p 18) In later antiquity, the Greek name of Mithras (Μίθρας) occurs in the text known as the "Mithras Liturgy", a part of the Paris Greek Magical Papyrus; here Mithras is given the epithet "the great god" and is identified with the sun god Helios. There have been different views among scholars as to whether this text is an expression of Mithraism as such: Franz Cumont argued that it is not;(p 12) Marvin Meyer thinks it is;(pp 180–182) while Hans Dieter Betz sees it as a synthesis of Greek, Egyptian, and Mithraic traditions.

Scholarship on Mithras begins with Franz Cumont, who published a two-volume collection of source texts and images of monuments in French in 1894–1900, Textes et monuments figurés relatifs aux mystères de Mithra [French: Texts and Illustrated Monuments Relating to the Mysteries of Mithra]. An English translation of part of this work was published in 1903, with the title The Mysteries of Mithra. Cumont's hypothesis, as the author summarizes it in the first 32 pages of his book, was that the Roman religion was "the Roman form of Mazdaism",(p 298) the Persian state religion, disseminated from the East. He identified the ancient Aryan deity who appears in Persian literature as Mithras with the Hindu god Mitra of the Vedic hymns. According to Cumont, the god Mithra came to Rome "accompanied by a large representation of the Mazdean Pantheon." Cumont considered that while the tradition "underwent some modification in the Occident ... the alterations that it suffered were largely superficial."

Cumont's theories came in for severe criticism from John R. Hinnells and R.L. Gordon at the First International Congress of Mithraic Studies, held in 1971.[av] John Hinnells was unwilling to reject entirely the idea of Iranian origin, but wrote: "we must now conclude that his reconstruction simply will not stand. It receives no support from the Iranian material and is in fact in conflict with the ideas of that tradition as they are represented in the extant texts. Above all, it is a theoretical reconstruction which does not accord with the actual Roman iconography."[aw] He discussed Cumont's reconstruction of the bull-slaying scene and stated "that the portrayal of Mithras given by Cumont is not merely unsupported by Iranian texts but is actually in serious conflict with known Iranian theology."[ax] Another paper, by R.L. Gordon, argued that Cumont severely distorted the available evidence by forcing the material to conform to his predetermined model of Zoroastrian origins. Gordon suggested that the theory of Persian origins was completely invalid and that the Mithraic mysteries in the West were an entirely new creation. A similar view has been expressed by Luther H. Martin: "Apart from the name of the god himself, in other words, Mithraism seems to have developed largely in and is, therefore, best understood from the context of Roman culture."(p xiv) According to Hopfe, "All theories of the origin of Mithraism acknowledge a connection, however vague, to the Mithra/Mitra figure of ancient Aryan religion."
Reporting on the Second International Congress of Mithraic Studies, 1975, Ugo Bianchi says that although he welcomes "the tendency to question in historical terms the relations between Eastern and Western Mithraism", it "should not mean obliterating what was clear to the Romans themselves, that Mithras was a 'Persian' (in wider perspective: an Indo-Iranian) god." Boyce wrote, "no satisfactory evidence has yet been adduced to show that, before Zoroaster, the concept of a supreme god existed among the Iranians, or that among them Mithra – or any other divinity – ever enjoyed a separate cult of his or her own outside either their ancient or their Zoroastrian pantheons." She also said that although recent studies have minimized the Iranizing aspects of the self-consciously Persian religion "at least in the form which it attained under the Roman Empire", the name Mithras is enough to show "that this aspect is of some importance." She also says that "the Persian affiliation of the Mysteries is acknowledged in the earliest literary references to them." Beck tells us that since the 1970s scholars have generally rejected Cumont, but adds that recent theories about the nature of Zoroastrianism during the centuries BCE now make some new form of Cumont's east–west transfer possible.[ay] He says that ... an indubitable residuum of things Persian in the Mysteries and a better knowledge of what constituted actual Mazdaism have allowed modern scholars to postulate for Roman Mithraism a continuing Iranian theology. This indeed is the main line of Mithraic scholarship, the Cumontian model which subsequent scholars accept, modify, or reject. For the transmission of Iranian doctrine from East to West, Cumont postulated a plausible, if hypothetical, intermediary: the Magusaeans of the Iranian diaspora in Anatolia. More problematic – and never properly addressed by Cumont or his successors – is how real-life Roman Mithraists subsequently maintained a quite complex and sophisticated Iranian theology behind an occidental facade. Other than the images at Dura of the two 'magi' with scrolls, there is no direct and explicit evidence for the carriers of such doctrines. ... Up to a point, Cumont's Iranian paradigm, especially in Turcan's modified form, is certainly plausible. He also says that "the old Cumontian model of formation in, and diffusion from, Anatolia ... is by no means dead – nor should it be." Beck theorizes that the cult was created in Rome, by a single founder who had some knowledge of both Greek and Oriental religion, but suggests that some of the ideas used may have passed through the Hellenistic kingdoms. He observes that "Mithras – moreover, a Mithras who was identified with the Greek Sun god Helios" was among the gods of the syncretic Greco-Armenian-Iranian royal cult at Nemrut, founded by Antiochus I of Commagene in the mid-1st century BCE. While proposing the theory, Beck says that his scenario may be regarded as Cumontian in two ways: firstly, because it looks again at Anatolia and Anatolians, and, more importantly, because it hews back to the methodology first used by Cumont. Merkelbach suggests that its mysteries were essentially created in a specific place, the city of Rome, by a particular person or persons from an eastern province or border state who knew the Iranian myths in detail, which he wove into his new grades of initiation; but that he must have been Greek and Greek-speaking because he incorporated elements of Greek Platonism into it.
The myths, he suggests, were probably created in the milieu of the imperial bureaucracy, and for its members. Clauss tends to agree. Beck calls this "the most likely scenario" and states "Until now, Mithraism has generally been treated as if it somehow evolved Topsy-like from its Iranian precursor – a most implausible scenario once it is stated explicitly."(pp 304, 306) Archaeologist Lewis M. Hopfe notes that there are only three mithraea in Roman Syria, in contrast to the many found further west. He writes: "Archaeology indicates that Roman Mithraism had its epicenter in Rome ... the fully developed religion known as Mithraism seems to have begun in Rome and been carried to Syria by soldiers and merchants."[az] Taking a different view from other modern scholars, Ulansey argues that the Mithraic mysteries began in the Greco-Roman world as a religious response to the discovery by the Greek astronomer Hipparchus of the astronomical phenomenon of the precession of the equinoxes – a discovery that revealed that the entire cosmos was moving in a hitherto unknown way. This new cosmic motion, he suggests, was seen by the founders of Mithraism as indicating the existence of a powerful new god capable of shifting the cosmic spheres and thereby controlling the universe.(pp 77 ff) A. D. H. Bivar, L. A. Campbell, and G. Widengren have variously argued that Roman Mithraism represents a continuation of some form of Iranian Mithra worship. More recently, Parvaneh Pourshariati has made similar claims. According to Antonia Tripolitis, Roman Mithraism originated in Vedic India and picked up many features of the cultures which it encountered in its westward journey.[ba] The first important expansion of the mysteries in the Empire seems to have happened quite quickly, late in the reign of Antoninus Pius (b. 121 CE, d. 161 CE) and under Marcus Aurelius. By this time all the key elements of the mysteries were in place.[bb] Mithraism reached the apogee of its popularity during the 2nd and 3rd centuries, spreading at an "astonishing" rate during the same period in which the worship of Sol Invictus was incorporated into the state-sponsored cults.(p 299)[bc] In this period a certain Pallas devoted a monograph to Mithras, and a little later Eubulus wrote a History of Mithras, although both works are now lost. According to the 4th-century Historia Augusta, the emperor Commodus participated in its mysteries, but it never became one of the state cults.[bd] The historian Jacob Burckhardt writes: "Mithras is the guide of souls which he leads from the earthly life into which they had fallen back up to the light from which they issued ... It was not only from the religions and the wisdom of Orientals and Egyptians, even less from Christianity, that the notion that life on earth was merely a transition to a higher life was derived by the Romans. Their own anguish and the awareness of senescence made it plain enough that earthly existence was all hardship and bitterness. Mithras-worship became one, and perhaps the most significant, of the religions of redemption in declining paganism." The religion and its followers faced persecution in the 4th century from Christianization, and Mithraism came to an end at some point between the last decade of that century and the 5th century.
Ulansey states that "Mithraism declined with the rise to power of Christianity, until the beginning of the fifth century, when Christianity became strong enough to exterminate by force rival religions such as Mithraism."[be] According to Speidel, Christians fought fiercely with this feared enemy and suppressed it during the late 4th century. Mithraic sanctuaries were destroyed, and religion was no longer a matter of personal choice.[bf][bg] According to L.H. Martin, Roman Mithraism came to an end with the anti-pagan decrees of the Christian emperor Theodosius during the last decade of the 4th century.[bh] Clauss states that Mithras appears as one of the cults listed in inscriptions by Roman senators who had not converted to Christianity, as part of the "pagan revival" among the elite in the second half of the 4th century.[bi] Beck states that "Quite early in the [fourth] century the religion was as good as dead throughout the empire."(p 299) Archaeological evidence indicates the continuance of the cult of Mithras up until the end of the 4th century. In particular, large numbers of votive coins deposited by worshippers have been recovered at the Mithraeum at Pons Sarravi (Sarrebourg) in Gallia Belgica, in a series that runs from Gallienus (r. 253–268) to Theodosius I (r. 379–395). These were scattered over the floor when the mithraeum was destroyed, as Christians apparently regarded the coins as polluted, thereby providing reliable dates for the functioning of the mithraeum up until near the end of the century.(pp 31–32) Franz Cumont states that Mithraism may have survived in certain remote cantons of the Alps and Vosges into the 5th century. According to Mark Humphries, the deliberate concealment of Mithraic cult objects in some areas suggests that precautions were being taken against Christian attacks. In areas like the Rhine frontier, barbarian invasions may have also played a role in the end of Mithraism. At some of the mithraea that have been found below churches, such as the Santa Prisca Mithraeum and the San Clemente Mithraeum, the ground plan of the church above was made in a way to symbolize Christianity's domination of Mithraism. The cult disappeared earlier than that of Isis. Isis was still remembered in the Middle Ages as a pagan deity, but Mithras was already forgotten in late antiquity.(p 171) Interpretations of the bull-slaying scene According to Cumont, the imagery of the tauroctony was a Graeco-Roman representation of an event in Zoroastrian cosmogony described in a 9th-century Zoroastrian text, the Bundahishn. In this text the evil spirit Ahriman (not Mithra) slays the primordial creature Gavaevodata, which is represented as a bovine.[bj] Cumont held that a version of the myth must have existed in which Mithras, not Ahriman, killed the bovine. But according to Hinnells, no such variant of the myth is known, and this is merely speculation: "In no known Iranian text [either Zoroastrian or otherwise] does Mithra slay a bull."(p 291) David Ulansey finds astronomical evidence from the mithraeum itself. He reminds us that the Platonic writer Porphyry wrote in the 3rd century CE that the cave-like Mithraic temples depicted "an image of the world"[bk] and that Zoroaster consecrated a cave resembling the world fabricated by Mithras.[bl] The ceiling of the Caesarea Maritima Mithraeum retains traces of blue paint, which may mean the ceiling was painted to depict the sky and the stars.
Beck has given a celestial composition of the tauroctony. [Table: celestial counterparts of the tauroctony figures] Several celestial identities for the Tauroctonous Mithras (TM) himself have been proposed; Beck summarizes them as well. [Table: proposed celestial identities for the Tauroctonous Mithras] Ulansey has proposed that Mithras seems to have been derived from the constellation of Perseus, which is positioned just above Taurus in the night sky. He sees iconographic and mythological parallels between the two figures: both are young heroes, carry a dagger, and wear a Phrygian cap. As further evidence, he also mentions the similarity between the image of Perseus killing the Gorgon and the tauroctony, both figures being associated with caverns and both having connections to Persia.(pp 25–39) Michael Speidel associates Mithras with the constellation of Orion because of its proximity to Taurus and the consistent depiction of the figure with wide shoulders, a garment flared at the hem, and a waist narrowed by a belt, thus taking on the form of the constellation. In opposition to the theories above, which link Mithras to specific constellations, Jelbert suggests that the deity represented the Milky Way. Jelbert argues that within the tauroctony image, Mithras' body is analogous to the path of the Milky Way that bridges Taurus and Scorpius, and that this bifurcated section mirrors the shape, scale and position of the deity relative to the other characters in the scene. The notion of Mithras as the Milky Way would have resonated with his status as god of light and lord of genesis, suggests Jelbert, due to the luminosity of this celestial feature, as well as the location of the traditional soul gates at Taurus–Gemini and Scorpius–Sagittarius, portals once believed to represent the points of entry for the soul at birth and death respectively. Beck has criticized Speidel and Ulansey for their adherence to a literal cartographic logic, describing their theories as a "will-o'-the-wisp" that "lured them down a false trail". He argues that a literal reading of the tauroctony as a star chart raises two major problems: it is difficult to find a constellation counterpart for Mithras himself (despite efforts by Speidel and Ulansey), and, unlike in a star chart, each feature of the tauroctony might have more than a single counterpart. Rather than seeing Mithras as a constellation, Beck argues that Mithras is the prime traveller on the celestial stage (represented by the other symbols of the scene), the Unconquered Sun moving through the constellations. But again, Meyer holds that the Mithras Liturgy reflects the world of Mithraism and may corroborate Ulansey's theory that Mithras was held responsible for the precession of the equinoxes.[bm] Peter Chrisp posits that the killing was of a "sacred bull" and that the act was believed to create the universe's life force and maintain it. Comparable belief systems The cult of Mithras was part of the syncretic nature of ancient Roman religion. Almost all Mithraea contain statues dedicated to gods of other cults, and it is common to find inscriptions dedicated to Mithras in other sanctuaries, especially those of Jupiter Dolichenus.(p 158) Mithraism was not an alternative to Rome's other traditional religions, but was one of many forms of religious practice, and many Mithraic initiates can also be found participating in the civic religion, and as initiates of other mystery cults.
Early Christian apologists noted similarities between Mithraic and Christian rituals, but nonetheless took an extremely negative view of Mithraism: they interpreted Mithraic rituals as evil copies of Christian ones. For instance, Tertullian wrote that as a prelude to the Mithraic initiation ceremony, the initiate was given a ritual bath and at the end of the ceremony, received a mark on the forehead. This mark may have been the Latin letter "M", which stood for the name of their messianic god-king Mithras. Tertullian also described these rites as a diabolical counterfeit of the baptism and chrismation of Christians. Justin Martyr contrasted Mithraic initiation communion with the Eucharist. Ernest Renan suggested in 1882 that, under different circumstances, Mithraism might have risen to the prominence of modern-day Christianity. Renan wrote: "If the growth of Christianity had been arrested by some mortal malady, the world would have been Mithraic".[bn] This theory has since been contested. Leonard Boyle wrote in 1987 that "too much ... has been made of the 'threat' of Mithraism to Christianity", pointing out that there are only fifty known mithraea in the entire city of Rome. J.A. Ezquerra holds that since the two religions did not share similar aims, there was never any real threat of Mithraism taking over the Roman world.[bo] Mithraism had backing from the Roman aristocracy at a time when their conservative values were seen as under attack amid the rising tide of Christianity. According to Mary Boyce, Mithraism was a potent enemy for Christianity in the West, though she is sceptical about its hold in the East.[bp] F. Coarelli (1979) has tabulated forty actual or possible Mithraea and estimated that Rome would have had "not less than 680–690" mithraea.[bq] L.M. Hopfe states that more than 400 Mithraic sites have been found, spread all over the Roman Empire, from as far as Dura-Europos in the east to England in the west. He, too, says that Mithraism may have been a rival of Christianity.[br] David Ulansey thinks Renan's statement "somewhat exaggerated",[bs] but does consider Mithraism "one of Christianity's major competitors in the Roman Empire".[bs]
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jews#cite_note-:33-37] | [TOKENS: 15852] |
Contents Jews Jews (Hebrew: יְהוּדִים, ISO 259-2: Yehudim, Israeli pronunciation: [jehuˈdim]), or the Jewish people, are an ethnoreligious group and nation, originating from the Israelites of ancient Israel and Judah. They traditionally adhere to Judaism. Jewish ethnicity, religion, and community are highly interrelated, as Judaism is an ethnic religion, though many ethnic Jews do not practice it. Religious Jews regard converts to Judaism as members of the Jewish nation, pursuant to the long-standing conversion process. The Israelites emerged from the pre-existing Canaanite peoples to establish Israel and Judah in the Southern Levant during the Iron Age. Originally, Jews referred to the inhabitants of the kingdom of Judah and were distinguished from the gentiles and the Samaritans. According to the Hebrew Bible, these inhabitants predominantly originated from the tribe of Judah, whose members were descendants of Judah, the fourth son of Jacob. The tribe of Benjamin was another significant group in Judah, and its members were considered Jews as well. By the late 6th century BCE, Judaism had evolved from the Israelite religion, dubbed Yahwism (for Yahweh) by modern scholars, with a theology that religious Jews believe to be the expression of the Mosaic covenant between God and the Jewish people. After the Babylonian exile, Jews referred to followers of Judaism, descendants of the Israelites, citizens of Judea, or allies of the Judean state. Jewish migration within the Mediterranean region during the Hellenistic period, followed by population transfers caused by events like the Jewish–Roman wars, gave rise to the Jewish diaspora, consisting of diverse Jewish communities that maintained their sense of Jewish history, identity, and culture. In the following millennia, Jewish diaspora communities coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (Central and Eastern Europe), the Sephardim (Iberian Peninsula), and the Mizrahim (Middle East and North Africa). While these three major divisions account for most of the world's Jews, there are other smaller Jewish groups outside of the three. Prior to World War II, the global Jewish population reached a peak of 16.7 million, representing around 0.7% of the world's population at that time. During World War II, approximately six million Jews throughout Europe were systematically murdered by Nazi Germany in a genocide known as the Holocaust. Since then, the population has slowly risen again and, as of 2021, was estimated at 15.2 million by the demographer Sergio Della Pergola, or less than 0.2% of the total world population as of 2012.[b] Today, over 85% of Jews live in Israel or the United States. Israel, whose population is 73.9% Jewish, is the only country where Jews comprise more than 2.5% of the population. Jews have significantly influenced and contributed to human progress in many fields, both historically and in modern times, including science and technology, philosophy, ethics, literature, governance, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Jews founded Christianity and had an indirect but profound influence on Islam. In these ways and others, Jews have played a significant role in the development of Western culture. Name and etymology The term "Jew" is derived from the Hebrew word יְהוּדִי Yehudi, with the plural יְהוּדִים Yehudim.
Endonyms in other Jewish languages include the Ladino ג׳ודיו Djudio (plural ג׳ודיוס, Djudios) and the Yiddish ייִד Yid (plural ייִדן Yidn). Though Genesis 29:35 and 49:8 connect "Judah" with the verb yada, meaning "praise", scholars generally agree that "Judah" most likely derives from the name of a Levantine geographic region dominated by gorges and ravines. The gradual ethnonymic shift from "Israelites" to "Jews", regardless of their descent from Judah, although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE) of the Tanakh. Some modern scholars disagree with the conflation, based on the works of Josephus, Philo, and the Apostle Paul. The English word "Jew" is a derivation of Middle English Gyw, Iewe. The latter was loaned from the Old French giu, which itself evolved from the earlier juieu, which in turn derived from judieu/iudieu, which through elision had dropped the letter "d" from the Medieval Latin Iudaeus, which, like the New Testament Greek term Ioudaios, meant both "Jew" and "Judean" / "of Judea". The Greek term was a loan from Aramaic *yahūdāy, corresponding to Hebrew יְהוּדִי Yehudi. Some scholars prefer translating Ioudaios as "Judean" in the Bible since it is more precise, denotes the community's origins and prevents readers from engaging in antisemitic eisegesis. Others disagree, believing that it erases the Jewish identity of Biblical characters such as Jesus. Daniel R. Schwartz distinguishes between "Judean" and "Jew". Here, "Judean" refers to the inhabitants of Judea, which encompassed southern Palestine. Meanwhile, "Jew" refers to the descendants of Israelites who adhere to Judaism. Converts are included in the definition. But Shaye J.D. Cohen argues that "Judean" is inclusive of believers in the Judean God and allies of the Judean state. Another scholar, Jodi Magness, wrote that the term Ioudaioi refers to a "people of Judahite/Judean ancestry who worshipped the God of Israel as their national deity and (at least nominally) lived according to his laws." The etymological equivalent is in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.), in Arabic, "Jude" in German, "judeu" in Portuguese, "Juif" (m.)/"Juive" (f.) in French, "jøde" in Danish and Norwegian, "judío/a" in Spanish, "jood" in Dutch, "żyd" in Polish etc., but derivations of the word "Hebrew" are also in use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)) and Russian (Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə], the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish) is the origin of the word "Yiddish". According to The American Heritage Dictionary of the English Language, fourth edition (2000), It is widely recognized that the attributive use of the noun Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are now several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun.
Identity Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making the definition of who is a Jew vary slightly depending on whether a religious or national approach to identity is used.[better source needed] Generally, in modern secular usage, Jews include three groups: people who were born to a Jewish family regardless of whether or not they follow the religion, those who have some Jewish ancestral background or lineage (sometimes including those who do not have strictly matrilineal descent), and people without any Jewish ancestral background or lineage who have formally converted to Judaism and therefore are followers of the religion. In the context of biblical and classical literature, Jews could refer to inhabitants of the Kingdom of Judah or the broader Judean region, allies of the Judean state, or anyone who followed Judaism. Historical definitions of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent and halakhic conversions. These definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud, around 200 CE. Interpretations by Jewish sages of sections of the Tanakh – such as Deuteronomy 7:1–5, which forbade intermarriage between their Israelite ancestors and seven non-Israelite nations: "for that [i.e. giving your daughters to their sons or taking their daughters for your sons,] would turn away your children from following me, to serve other gods"[failed verification] – are used as a warning against intermarriage between Jews and gentiles. Leviticus 24:10 says that the son in a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra 10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. A popular theory is that the rape of Jewish women in captivity brought about the law of Jewish identity being inherited through the maternal line, although scholars challenge this theory, citing the Talmudic establishment of the law in the pre-exilic period. Another argument is that the rabbis changed the law of patrilineal descent to matrilineal descent due to the widespread rape of Jewish women by Roman soldiers. Since the anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was determined patrilineally in the Bible. He offers two likely explanations for the change in Mishnaic times: first, the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim). Thus, a mixed marriage is forbidden as is the union of a horse and a donkey, and in both unions the offspring are judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent could not contract a legal marriage, offspring would follow the mother. Rabbi Rivon Krygier follows similar reasoning, arguing that Jewish descent had formerly passed through the patrilineal line and that the law of matrilineal descent had its roots in the Roman legal system. Origins The prehistory and ethnogenesis of the Jews are closely intertwined with archaeology, biology, historical textual records, mythology, and religious literature.
The ethnic origin of the Jews lies in the Israelites, a confederation of Iron Age Semitic-speaking tribes that inhabited a part of Canaan during the tribal and monarchic periods. Modern Jews are named after and also descended from the southern Israelite Kingdom of Judah. Gary A. Rendsburg links the early confederation of Canaanite nomadic pastoralists to the Shasu, known to the Egyptians around the 15th century BCE. According to the Hebrew Bible narrative, Jewish history begins with the Biblical patriarchs such as Abraham, his son Isaac, Isaac's son Jacob, and the Biblical matriarchs Sarah, Rebecca, Leah, and Rachel, who lived in Canaan. The twelve sons of Jacob subsequently became the ancestors of the Twelve Tribes. Jacob and his family migrated to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. Jacob's descendants were later enslaved until the Exodus, led by Moses. Afterwards, the Israelites conquered Canaan under Moses' successor Joshua and, after his death, went through the period of the Biblical judges. Through the mediation of Samuel, the Israelites were subject to a king, Saul, who was succeeded by David and then Solomon, after whom the United Monarchy ended and was split into a separate Kingdom of Israel and a Kingdom of Judah. The Kingdom of Judah is described as comprising the tribes of Judah and Benjamin and, in part, Levi. They later assimilated remnants of other tribes who migrated there from the northern Kingdom of Israel. In the extra-biblical record, the Israelites become visible as a people between 1200 and 1000 BCE. There is well-accepted archaeological evidence referring to "Israel" in the Merneptah Stele, which dates to about 1200 BCE, and in the Mesha Stele from 840 BCE. It is debated whether a period like that of the Biblical judges occurred and whether there ever was a United Monarchy. There is further disagreement about the earliest existence of the Kingdoms of Israel and Judah and their extent and power. Historians agree that a Kingdom of Israel existed by c. 900 BCE;: 169–95 there is a consensus that a Kingdom of Judah existed by c. 700 BCE at least; and recent excavations at Khirbet Qeiyafa have provided strong evidence for dating the Kingdom of Judah to the 10th century BCE. In 587 BCE, Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged Jerusalem, destroyed the First Temple and deported parts of the Judahite population. Scholars disagree regarding the extent to which the Bible should be accepted as a historical source for early Israelite history. Rendsburg states that there are two approximately equal groups of scholars who debate the historicity of the biblical narrative: the minimalists, who largely reject it, and the maximalists, who largely accept it, with the minimalists being the more vocal of the two. Some of the leading minimalists reframe the biblical account as constituting the Israelites' inspiring national myth narrative, suggesting that according to the modern archaeological and historical account, the Israelites and their culture did not overtake the region by force, but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic—and later monotheistic—religion of Yahwism centered on Yahweh, one of the gods of the Canaanite pantheon. The growth of Yahweh-centric belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting them apart from other Canaanites.
According to Dever, modern archaeologists have largely discarded the search for evidence of the biblical narrative surrounding the patriarchs and the exodus. According to the maximalist position, the modern archaeological record independently points to a narrative which largely agrees with the biblical account. This narrative presents the Israelites as a nomadic people known to the Egyptians as belonging to the Shasu. Over time these nomads left the desert and settled on the central mountain range of the land of Canaan, in simple semi-nomadic settlements in which pig bones are notably absent. This population gradually shifted from a tribal lifestyle to a monarchy. The archaeological record of the ninth century BCE provides evidence for two monarchies: one in the south under a dynasty founded by a figure named David, with its capital in Jerusalem, and one in the north under a dynasty founded by a figure named Omri, with its capital in Samaria. It also points to an early monarchic period in which these regions shared material culture and religion, suggesting a common origin. Archaeological finds also provide evidence for the later cooperation of these two kingdoms in their coalition against Aram, and for their destruction by the Assyrians and later by the Babylonians. Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle East, and that they share certain genetic traits with gentile peoples of the Fertile Crescent. The genetic composition of different Jewish groups shows that Jews share a common gene pool dating back four millennia, as a marker of their common ancestral origin. Despite their long-term separation, Jewish communities maintained their unique commonalities, propensities, and sensibilities in culture, tradition, and language. History The earliest recorded evidence of a people by the name of Israel appears in the Merneptah Stele, which dates to around 1200 BCE. The majority of scholars agree that this text refers to the Israelites, a group that inhabited the central highlands of Canaan, where archaeological evidence shows that hundreds of small settlements were constructed between the 12th and 10th centuries BCE. The Israelites differentiated themselves from neighboring peoples through various distinct characteristics including religious practices, prohibition on intermarriage, and an emphasis on genealogy and family history. In the 10th century BCE, two neighboring Israelite kingdoms—the northern Kingdom of Israel and the southern Kingdom of Judah—emerged. Since their inception, they shared ethnic, cultural, linguistic and religious characteristics despite a complicated relationship. Israel, with its capital mostly in Samaria, was larger and wealthier, and soon developed into a regional power. In contrast, Judah, with its capital in Jerusalem, was less prosperous and covered a smaller, mostly mountainous territory. However, while in Israel the royal succession was often decided by a military coup d'état, resulting in several dynasty changes, political stability in Judah was much greater, as it was ruled by the House of David for all four centuries of its existence. Scholars also describe Biblical Jews as a 'proto-nation', in the modern nationalist sense, comparable to the classical Greeks, the Gauls, and the British Celts. Around 720 BCE, the Kingdom of Israel was destroyed when it was conquered by the Neo-Assyrian Empire, which came to dominate the ancient Near East.
Under the Assyrian resettlement policy, a significant portion of the northern Israelite population was exiled to Mesopotamia and replaced by immigrants from the same region. During the same period, and throughout the 7th century BCE, the Kingdom of Judah, now under Assyrian vassalage, experienced a period of prosperity and witnessed significant population growth. This prosperity continued until the Neo-Assyrian king Sennacherib devastated the region of Judah in response to a rebellion in the area, ultimately halting at Jerusalem. Later in the same century, the Assyrians were defeated by the rising Neo-Babylonian Empire, and Judah became its vassal. In 587 BCE, following a revolt in Judah, the Babylonian king Nebuchadnezzar II besieged and destroyed Jerusalem and the First Temple, putting an end to the kingdom. The majority of Jerusalem's residents, including the kingdom's elite, were exiled to Babylon. According to the Book of Ezra, the Persian Cyrus the Great ended the Babylonian exile in 538 BCE, the year after he captured Babylon. The exile ended with the return under Zerubbabel the Prince (so called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple circa 521–516 BCE. As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehud Medinata), with a smaller territory and a reduced population. Judea was under the control of the Achaemenids until the fall of their empire in c. 333 BCE to Alexander the Great. After several centuries under foreign imperial rule, the Maccabean Revolt against the Seleucid Empire resulted in an independent Hasmonean kingdom, under which the Jews once again enjoyed political independence for a period spanning from 110 to 63 BCE. Under Hasmonean rule the boundaries of their kingdom were expanded to include not only the land of the historical kingdom of Judah, but also the Galilee and Transjordan. At the beginning of this process the Idumeans, who had infiltrated southern Judea after the destruction of the First Temple, were converted en masse. In 63 BCE, Judea was conquered by the Romans. From 37 BCE to 6 CE, the Romans allowed the Jews to maintain some degree of independence by installing the Herodian dynasty as vassal kings. However, Judea eventually came directly under Roman control and was incorporated into the Roman Empire as the province of Judaea. The Jewish–Roman wars, a series of failed uprisings against Roman rule during the first and second centuries CE, had profound and devastating consequences for the Jewish population of Judaea. The First Jewish–Roman War (66–73/74 CE) culminated in the destruction of Jerusalem and the Second Temple, after which the significantly diminished Jewish population was stripped of political autonomy. A few generations later, the Bar Kokhba revolt (132–136 CE) erupted in response to Roman plans to rebuild Jerusalem as a Roman colony, and, possibly, to restrictions on circumcision. Its violent suppression by the Romans led to the near-total depopulation of Judea, and the demographic and cultural center of Jewish life shifted to Galilee. Jews were subsequently banned from residing in Jerusalem and the surrounding area, and the province of Judaea was renamed Syria Palaestina. These developments effectively ended Jewish efforts to restore political sovereignty in the region for nearly two millennia.
Similar upheavals impacted the Jewish communities in the empire's eastern provinces during the Diaspora Revolt (115–117 CE), leading to the near-total destruction of Jewish diaspora communities in Libya, Cyprus and Egypt, including the highly influential community in Alexandria. The destruction of the Second Temple in 70 CE brought profound changes to Judaism. With the Temple's central place in Jewish worship gone, religious practices shifted towards prayer, Torah study (including Oral Torah), and communal gatherings in synagogues. Judaism also lost much of its sectarian nature.: 69 Two of the three main sects that flourished during the late Second Temple period, namely the Sadducees and Essenes, eventually disappeared, while Pharisaic beliefs became the foundational, liturgical, and ritualistic basis of Rabbinic Judaism, which has been the prevailing form of Judaism since late antiquity. The Jewish diaspora existed well before the destruction of the Second Temple in 70 CE and had been ongoing for centuries, with the dispersal driven by both forced expulsions and voluntary migrations. In Mesopotamia, a testimony to the beginnings of the Jewish community can be found in Jehoiachin's ration tablets, listing provisions allotted to the exiled Judean king and his family by Nebuchadnezzar II, and further evidence is provided by the Al-Yahudu tablets, dated to the 6th–5th centuries BCE and related to the exiles from Judea arriving after the destruction of the First Temple, though there is ample evidence for the presence of Jews in Babylonia even from 626 BCE. In Egypt, the documents from Elephantine reveal the trials of a community founded by a Jewish garrison in Persian service at two fortresses on the frontier during the 5th–4th centuries BCE, and according to Josephus the Jewish community in Alexandria had existed since the founding of the city in the 4th century BCE by Alexander the Great. By 200 BCE, there were well established Jewish communities both in Egypt and Mesopotamia ("Babylonia" in Jewish sources) and in the two centuries that followed, Jewish populations were also present in Asia Minor, Greece, Macedonia, Cyrene, and, beginning in the middle of the first century BCE, in the city of Rome. Later, in the first centuries CE, as a result of the Jewish–Roman wars, a large number of Jews were taken as captives, sold into slavery, or compelled to flee from the regions affected by the wars, contributing to the formation and expansion of Jewish communities across the Roman Empire as well as in Arabia and Mesopotamia. After the Bar Kokhba revolt, the Jewish population in Judaea—now significantly reduced—made efforts to recover from the revolt's devastating effects, but never fully regained its former strength. Between the second and fourth centuries CE, the region of Galilee emerged as the primary center of Jewish life in Syria Palaestina, experiencing both demographic growth and cultural development. It was during this period that two central rabbinic texts, the Mishnah and the Jerusalem Talmud, were composed. The Romans recognized the patriarchs—rabbinic sages such as Judah ha-Nasi—as representatives of the Jewish people, granting them a certain degree of autonomy. However, as the Roman Empire gave way to the Christianized Byzantine Empire under Constantine, Jews began to face persecution by both the Church and the imperial authorities, and many emigrated to communities in the diaspora.
By the fourth century CE, Jews are believed to have lost their demographic majority in Syria Palaestina. The long-established Jewish community of Mesopotamia, which had been living under Parthian and later Sasanian rule, beyond the confines of the Roman Empire, became an important center of Jewish study as Judea's Jewish population declined. Estimates often place the Babylonian Jewish community of the 3rd to 7th centuries at around one million, making it the largest Jewish diaspora community of that period. Under the political leadership of the exilarch, who was regarded as a royal heir of the House of David, this community had an autonomous status and served as a place of refuge for the Jews of Syria Palaestina. A number of significant Talmudic academies, such as the Nehardea, Pumbedita, and Sura academies, were established in Mesopotamia, and many important Amoraim were active there. The Babylonian Talmud, a centerpiece of Jewish religious law, was compiled in Babylonia in the 3rd to 6th centuries. Jewish diaspora communities are generally described as having coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (initially in the Rhineland and France), the Sephardim (initially in the Iberian Peninsula), and the Mizrahim (Middle East and North Africa). Romaniote Jews, Tunisian Jews, Yemenite Jews, Egyptian Jews, Ethiopian Jews, Bukharan Jews, Mountain Jews, and other groups also predated the arrival of the Sephardic diaspora. During the same period, Jewish communities in the Middle East thrived under Islamic rule, especially in cities like Baghdad, Cairo, and Damascus. In Babylonia, from the 7th to the 11th centuries, the Pumbedita and Sura academies led the Jewish world of the Arab lands and, to an extent, the entire Jewish world. The deans and students of these academies defined the Geonic period in Jewish history. Following this period were the Rishonim, who lived from the 11th to 15th centuries. Like their European counterparts, Jews in the Middle East and North Africa also faced periods of persecution and discriminatory policies, with the Almohad Caliphate in North Africa and Iberia issuing forced conversion decrees, causing Jews such as Maimonides to seek safety in other regions. Despite experiencing repeated waves of persecution, Ashkenazi Jews in Western Europe worked in a variety of fields, making an impact on their communities' economy and societies. In Francia, for example, figures like Isaac Judaeus and Armentarius occupied prominent social and economic positions. Francia also witnessed the development of a sophisticated tradition of biblical commentary, as exemplified by Rashi and the tosafists. In 1144, the first documented blood libel occurred in Norwich, England, marking an escalation in the pattern of discrimination and violence that Jews had already been subjected to throughout medieval Europe. During the 12th and 13th centuries, Jews faced frequent antisemitic legislation, including laws prescribing distinctive dress, alongside segregation, repeated blood libels, pogroms, and massacres such as the earlier Rhineland Massacres (1096). The Jews of the Holy Roman Empire were designated Servi camerae regis ("servants of the imperial chamber") by Frederick II, a status that afforded limited protection while simultaneously entangling them in the political struggles between the emperor and the German principalities and cities.
Persecution intensified during the Black Death in the mid-14th century, when Jews were accused of poisoning wells and many communities were destroyed. These pressures, combined with major expulsions such as that from England in 1290, gradually pushed Ashkenazi Jewish populations eastward into Poland, Lithuania, and Russia. One of the largest Jewish communities of the Middle Ages was in the Iberian Peninsula, which for a time contained the largest Jewish population in Europe. Iberian Jewry endured discrimination under the Visigoths but saw its fortunes improve under Umayyad rule and later the Taifa kingdoms. During this period, the Jews of Muslim Spain entered a "Golden Age" marked by achievements in Hebrew poetry and literature, religious scholarship, grammar, medicine and science, with leading figures including Hasdai ibn Shaprut, Judah Halevi, Moses ibn Ezra and Solomon ibn Gabirol. Jews also rose to high office, most notably Samuel ibn Naghrillah, a scholar and poet who served as grand vizier and military commander of Granada. The Golden Age ended with the advancing Reconquista and the rise of the radical Almoravid and Almohad dynasties, whose persecutions drove many Jews from Iberia (including Maimonides). In 1391, widespread pogroms swept across Spain, leaving thousands dead and forcing mass conversions. The Spanish Inquisition was later established to pursue, torture and execute conversos who continued to practice Judaism in secret, while public disputations were staged to discredit Judaism. In 1492, after the Reconquista, Isabella I of Castile and Ferdinand II of Aragon decreed the expulsion of all Jews who refused conversion, sending an estimated 200,000 into exile in Portugal, Italy, North Africa, and the Ottoman Empire. In 1497, Portugal's roughly 30,000 Jews were formally ordered to leave but were instead forcibly converted, so that Portugal could retain their economic role. In 1498, some 3,500 Jews were expelled from Navarre. Many converts outwardly adopted Christianity while secretly preserving Jewish practices, becoming crypto-Jews (also known as marranos or anusim), who remained targets of the various Inquisitions for centuries. Following the expulsions from Spain and Portugal in the 1490s, Jewish exiles dispersed across the Mediterranean, Europe, and North Africa. Many settled in the Ottoman Empire—which, replacing the Iberian Peninsula, became home to the world's largest Jewish population—where new communities developed in Anatolia, the Balkans, and the Land of Israel. Cities such as Istanbul and Thessaloniki grew into major Jewish centers, while in 16th-century Safed a flourishing spiritual life took shape. There, Solomon Alkabetz, Moses Cordovero, and Isaac Luria developed influential new schools of Kabbalah, giving powerful impetus to Jewish mysticism, and Joseph Karo composed the Shulchan Aruch, which became a cornerstone of Jewish law. In the 17th century, Portuguese conversos who returned to Judaism and engaged in trade and banking helped establish Amsterdam as a prosperous Jewish center, while also forming communities in cities such as Antwerp and London. This period also witnessed waves of messianic fervor, most notably the rise of the Sabbatean movement in the 1660s, led by Sabbatai Zvi of İzmir, which reverberated throughout the Jewish world. In Eastern Europe, Poland–Lithuania became the principal center of Ashkenazi Jewry, eventually becoming home to the largest Jewish population in the world.
Jewish life flourished there in the early modern era, supported by relative stability, economic opportunity, and strong communal institutions. The mid-17th century brought devastation with the Cossack uprisings in Ukraine, which reversed migration flows and sent refugees westward, yet Poland–Lithuania remained the demographic and cultural heartland of Ashkenazic Jewry. Following the partitions of Poland, most of its Jews came under Russian rule and were confined to the "Pale of Settlement." The 18th century also witnessed new religious and intellectual currents. Hasidism, founded by the Baal Shem Tov, emphasized mysticism and piety, while its opponents, the Misnagdim ("opponents") led by the Vilna Gaon, defended rabbinic scholarship and tradition. In Western Europe, during the 1760s and 1770s, the Haskalah (Jewish Enlightenment) emerged in German-speaking lands, where figures such as Moses Mendelssohn promoted secular learning, vernacular literacy, and integration into European society. Elsewhere, Jews began to be re-admitted to Western Europe, including England, where Menasseh ben Israel petitioned Oliver Cromwell for their return. In the Americas, Jews of Sephardic descent first arrived as conversos in Spanish and Portuguese colonies, where many faced trial by Inquisition tribunals for "judaizing." A more durable presence began in Dutch Brazil, where Jews openly practiced their religion and established the first synagogues in the New World, before the Portuguese reconquest forced their dispersal to Amsterdam, the Caribbean, and North America. Sephardic communities took root in Curaçao, Suriname, Jamaica, and Barbados, later joined by Ashkenazi migrants. In North America, Jews were present from the mid-17th century, with New Amsterdam hosting the first organized congregation in 1654. By the time of the American Revolution, small communities in New York, Newport, Philadelphia, Savannah, and Charleston played an active role in the struggle for independence. In the late 19th century, Jews in Western Europe gradually achieved legal emancipation, though social acceptance remained limited by persistent antisemitism and rising nationalism. In Eastern Europe, particularly within the Russian Empire's Pale of Settlement, Jews faced mounting legal restrictions and recurring pogroms. From this environment emerged Zionism, a national revival movement originating in Central and Eastern Europe that sought to re-establish a Jewish polity in the Land of Israel as a means of returning the Jewish people to their ancestral homeland and ending centuries of exile and persecution. This led to waves of Jewish migration to Ottoman-controlled Palestine. Theodor Herzl, who is considered the father of political Zionism, offered his vision of a future Jewish state in his 1896 book Der Judenstaat (The Jewish State); a year later, he presided over the First Zionist Congress. The antisemitism that afflicted Jewish communities in Europe also triggered a mass exodus of 2.8 million Jews to the United States between 1881 and 1924. Despite this, some Jews of Europe and the United States were able to make great achievements in various fields of science and culture. Among the most influential from this period are Albert Einstein in physics, Sigmund Freud in psychology, Franz Kafka in literature, and Irving Berlin in music. Many Nobel Prize winners at this time were Jewish, as is still the case.
When Adolf Hitler and the Nazi Party came to power in Germany in 1933, the situation for Jews deteriorated rapidly as a direct result of Nazi policies. Many Jews fled from Europe to Mandatory Palestine, the United States, and the Soviet Union as a result of racial antisemitic laws, economic difficulties, and the fear of an impending war. World War II started in 1939, and by 1941 Germany had occupied almost all of Europe. Following the German invasion of the Soviet Union in 1941, the Final Solution—an extensive, organized effort with an unprecedented scope intended to annihilate the Jewish people—began, and resulted in the persecution and murder of Jews in Europe and North Africa. In Poland, three million Jews were murdered in gas chambers in all camps combined, with one million at the Auschwitz camp complex alone. The Holocaust is the name given to this genocide, in which six million Jews in total were systematically murdered. Before and during the Holocaust, enormous numbers of Jews immigrated to Mandatory Palestine. In 1944, the Jewish insurgency in Mandatory Palestine began with the aim of gaining full independence from the United Kingdom. On 14 May 1948, upon the termination of the mandate, David Ben-Gurion declared the creation of the State of Israel, a Jewish and democratic state. Immediately afterwards, all neighboring Arab states invaded, and were resisted by the newly formed Israel Defense Forces. In 1949, the war ended and Israel started building its state and absorbing waves of Aliyah, granting citizenship to Jews all over the world via the Law of Return, passed in 1950. However, both the Israeli–Palestinian conflict and the wider Arab–Israeli conflict continue to this day. Culture The Jewish people and the religion of Judaism are strongly interrelated. Converts to Judaism have a status within the Jewish people equal to those born into it. However, converts who subsequently practice no Judaism are likely to be viewed with skepticism. Mainstream Judaism does not proselytize, and conversion is considered a difficult task. A significant portion of conversions are undertaken by children of mixed marriages, or would-be or current spouses of Jews. The Hebrew Bible, a religious interpretation of the traditions and early history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54 percent of the world's population. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity rather difficult. Throughout history, in eras and places as diverse as the ancient Hellenic world, Europe before and after the Age of Enlightenment (see Haskalah), Islamic Spain and Portugal, North Africa and the Middle East, India, China, and the contemporary United States and Israel, cultural phenomena have developed that are in some sense characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, and still others from the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities.
Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"), the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as in the Jewish communities of Asoristan, known to Jews as Babylonia, were speaking Hebrew and Aramaic, the languages of the Babylonian Talmud. Dialects of these same languages were also used by the Jews of Syria Palaestina at that time.[citation needed] For centuries, Jews worldwide have spoken the local or dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that became independent languages. Yiddish is the Judaeo-German language developed by Ashkenazi Jews who migrated to Central Europe. Ladino is the Judaeo-Spanish language developed by Sephardic Jews who migrated to the Iberian Peninsula. Due to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish languages of several communities, including Judaeo-Georgian, Judaeo-Arabic, Judaeo-Berber, Krymchak, Judaeo-Malayalam and many others, have largely fallen out of use. For over sixteen centuries Hebrew was used almost exclusively as a liturgical language, and as the language in which most books on Judaism had been written, with a few Jews speaking only Hebrew on the Sabbath. Hebrew was revived as a spoken language by Eliezer ben Yehuda, who arrived in Palestine in 1881. It had not been used as a mother tongue since Tannaitic times. Modern Hebrew is designated as the "State language" of Israel. Despite efforts to revive Hebrew as the national language of the Jewish people, knowledge of the language is not commonly possessed by Jews worldwide, and English has emerged as the lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century, most Jews lack such knowledge today and English has by and large superseded most Jewish vernaculars. The three most commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language, but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and the State of Israel. In some places, the mother language of the Jewish community differs from that of the general population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the Sephardic minority uses French as its primary language. Similarly, South African Jews adopted English rather than Afrikaans. Due to both Czarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews, but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish communities in a number of Post-Soviet states, such as Ukraine and Uzbekistan,[better source needed] as well as for Ashkenazic Jews in Azerbaijan, Georgia, and Tajikistan.
Although communities in North Africa today are small and dwindling, Jews there had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and the city of Tunis, while most North Africans continue to use Arabic or Berber as their mother tongue. There is no single governing body for the Jewish community, nor a single authority with responsibility for religious doctrine. Instead, a variety of secular and religious institutions at the local, national, and international levels lead various parts of the Jewish community on a variety of issues. Today, many countries have a Chief Rabbi who serves as a representative of that country's Jewry. Although many Hasidic Jews follow a certain hereditary Hasidic dynasty, there is no one commonly accepted leader of all Hasidic Jews. Many Jews believe that the Messiah will act as a unifying leader for Jews and the entire world. A number of modern scholars of nationalism support the existence of Jewish national identity in antiquity. One of them is David Goodblatt, who generally believes in the existence of nationalism before the modern period. In his view, the Bible, the parabiblical literature and the Jewish national history provide the basis for a Jewish collective identity. Although many of the ancient Jews were illiterate (as were their neighbors), their national narrative was reinforced through public readings. The Hebrew language also constructed and preserved national identity. Although it was not widely spoken after the 5th century BCE, Goodblatt states: "the mere presence of the language in spoken or written form could invoke the concept of a Jewish national identity. Even if one knew no Hebrew or was illiterate, one could recognize that a group of signs was in Hebrew script. ... It was the language of the Israelite ancestors, the national literature, and the national religion. As such it was inseparable from the national identity. Indeed its mere presence in visual or aural medium could invoke that identity." Anthony D. Smith, an historical sociologist considered one of the founders of the field of nationalism studies, wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation [...] than perhaps anywhere else in the ancient world." He adds that this observation "must make us wary of pronouncing too readily against the possibility of the nation, and even a form of religious nationalism, before the onset of modernity." Agreeing with Smith, Goodblatt suggests omitting the qualifier "religious" from Smith's definition of ancient Jewish nationalism, noting that, according to Smith, a religious component in national memories and culture is common even in the modern era. This view is echoed by political scientist Tom Garvin, who writes that "something strangely like modern nationalism is documented for many peoples in medieval times and in classical times as well," citing the ancient Jews as one of several "obvious examples", alongside the classical Greeks and the Gaulish and British Celts. Fergus Millar suggests that the sources of Jewish national identity and their early nationalist movements in the first and second centuries CE included several key elements: the Bible as both a national history and legal source, the Hebrew language as a national language, a system of law, and social institutions such as schools, synagogues, and Sabbath worship.
Adrian Hastings argued that the Jews are the "true proto-nation" which, through the model of ancient Israel found in the Hebrew Bible, provided the world with the original concept of nationhood that later influenced Christian nations. However, following Jerusalem's destruction in the first century CE, Jews ceased to be a political entity and did not resemble a traditional nation-state for almost two millennia. Despite this, they maintained their national identity through collective memory, religion and sacred texts, even without land or political power, and remained a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of Israel. Steven Weitzman suggests that Jewish nationalist sentiment in antiquity was encouraged because under foreign rule (Persians, Greeks, Romans) Jews were able to claim that they were an ancient nation. This claim was based on the preservation and reverence of their scriptures, the Hebrew language, the Temple and priesthood, and other traditions of their ancestors. Doron Mendels further observes that the Hasmonean kingdom, one of the few examples of indigenous statehood at its time, significantly reinforced Jewish national consciousness. The memory of this period of independence contributed to the persistent efforts to revive Jewish sovereignty in Judea, leading to the major revolts against Roman rule in the 1st and 2nd centuries CE. Demographics Within the world's Jewish population there are distinct ethnic divisions, most of which are primarily the result of geographic branching from an originating Israelite population and subsequent independent evolutions. An array of Jewish communities was established by Jewish settlers in various places around the Old World, often at great distances from one another, resulting in effective and often long-term isolation. During the millennia of the Jewish diaspora the communities would develop under the influence of their local environments: political, cultural, natural, and demographic. Today, manifestations of these differences among the Jews can be observed in the Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical practices, religious interpretations, and degrees and sources of genetic admixture. Jews are often identified as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim are so named in reference to their geographical origins (their ancestors' culture coalesced in the Rhineland, an area historically referred to by Jews as Ashkenaz). Similarly, Sephardim (Sefarad meaning "Spain" in Hebrew) are named in reference to their origins in Iberia. The diverse groups of Jews of the Middle East and North Africa are often collectively referred to as Sephardim, together with Sephardim proper, for liturgical reasons having to do with their prayer rites. A common term for many of these non-Spanish Jews who are sometimes still broadly grouped as Sephardim is Mizrahim (lit. 'easterners' in Hebrew). Nevertheless, Mizrahim and Sephardim are usually ethnically distinct. Smaller groups include, but are not restricted to, Indian Jews such as the Bene Israel, Bnei Menashe, Cochin Jews, and Bene Ephraim; the Romaniotes of Greece; the Italian Jews ("Italkim" or "Bené Roma"); the Teimanim from Yemen; various African Jews, including most numerously the Beta Israel of Ethiopia; and Chinese Jews, most notably the Kaifeng Jews, as well as various other distinct but now almost extinct communities.
The divisions between all these groups are approximate and their boundaries are not always clear. The Mizrahim, for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish communities that are no more closely related to each other than they are to any of the earlier mentioned Jewish groups. In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite their independent development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish Jews, Moroccan Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews, Afghan Jews, and various others. The Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among them differs from that found in Mizrahim. In addition, a distinction is made between the Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent the bulk of modern Jewry, with at least 70 percent of Jews worldwide (and up to 90 percent prior to World War II and the Holocaust). As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish population representative of all groups, a melting pot independent of each group's proportion within the overall world Jewish population. Y-DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male-line ancestors appear to have been mainly Middle Eastern. For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany, and the French Rhine Valley. This is consistent with Jewish traditions placing most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40 percent of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude: "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons."
However, a 2025 genetic study on the Ashkenazi Jewish founder population supports the presence of a substantial Near Eastern component in the maternal lineages. Analyses of mitochondrial DNA (mtDNA) indicate that the core founder lineages, estimated at around 54, likely originated from the Near East, with these founder signatures appearing in multiple copies across the population. While later admixture introduced additional mtDNA lineages, these absorbed lineages are distinguishable from the original founders. The findings are consistent with genome-wide identity-by-descent and lineage-extinction analyses, reinforcing the Near Eastern origin of the Ashkenazi maternal founders. A study showed that 7% of Ashkenazi Jews have the haplogroup G2c, which is mainly found in Pashtuns and, at lower frequencies, in all major Jewish groups, Palestinians, Syrians, and Lebanese. Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most members of a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic composition of Ashkenazi, Sephardic, and Mizrahi Jewish populations shows a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". Jewish populations of North African, Italian, and Iberian origin show variable frequencies of admixture with non-Jewish historical host populations along the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly Southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations. Behar et al. have remarked on a close relationship between Ashkenazi Jews and modern Italians. A 2001 study found that Jews were more closely related to groups of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors, whose genetic signature was found in geographic patterns reflective of the Islamic conquests. The studies also show that the Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism), who comprise up to 19.8 percent of the population of today's Iberia (Spain and Portugal) and at least 10 percent of the population of Ibero-America (Hispanic America and Brazil), have Sephardic Jewish ancestry from within the last few centuries. The Bene Israel and Cochin Jews of India, the Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries, have also been thought to have some more remote ancient Jewish ancestry. Views on the Lemba have changed, and genetic Y-DNA analyses in the 2000s have established a partially Middle Eastern origin for a portion of the male Lemba population but have been unable to narrow this down further. Although historically Jews have been found all over the world, in the decades since World War II and the establishment of Israel, they have increasingly concentrated in a small number of countries.
In 2021, Israel and the United States together accounted for over 85 percent of the global Jewish population, with approximately 45.3% and 39.6% of the world's Jews, respectively. More than half (51.2%) of world Jewry resides in just ten metropolitan areas. As of 2021, these ten areas were Tel Aviv, New York, Jerusalem, Haifa, Los Angeles, Miami, Philadelphia, Paris, Washington, and Chicago. The Tel Aviv metro area has the highest percentage of Jews among its total population (94.8%), followed by Haifa (73.1%), Jerusalem (72.3%), and Beersheba (60.4%), the balance mostly being Israeli Arabs. Outside Israel, the highest percentage of Jews in a metropolitan area was in New York (10.8%), followed by Miami (8.7%), Philadelphia (6.8%), San Francisco (5.1%), Washington (4.7%), Los Angeles (4.7%), Toronto (4.5%), and Baltimore (4.1%). As of 2010, there were nearly 14 million Jews around the world, roughly 0.2% of the world's population at the time. According to the 2007 estimates of The Jewish People Policy Planning Institute, the world's Jewish population was 13.2 million. This statistic incorporates both practicing Jews affiliated with synagogues and the Jewish community, and approximately 4.5 million unaffiliated and secular Jews. According to Sergio Della Pergola, a demographer of the Jewish population, in 2021 there were about 6.8 million Jews in Israel, 6 million in the United States, and 2.3 million in the rest of the world. Israel, the Jewish nation-state, is the only country in which Jews make up a majority of the citizens. Israel was established as an independent democratic and Jewish state on 14 May 1948. Of the 120 members in its parliament, the Knesset, 14 were Arab citizens of Israel as of 2016 (not including the Druze), most representing Arab political parties. One of Israel's Supreme Court judges is also an Arab citizen of Israel. Between 1948 and 1958, the Jewish population rose from 800,000 to two million. Currently, Jews account for 75.4 percent of the Israeli population, or 6 million people. The early years of the State of Israel were marked by the mass immigration of Holocaust survivors in the aftermath of the Holocaust and of Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to Israel in the late 1980s and early 1990s. Between 1974 and 1979, some 227,258 immigrants arrived in Israel, about half of them from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe, Latin America, and North America. A trickle of immigrants from other communities has also arrived, including Indian Jews and others, as well as some descendants of Ashkenazi Holocaust survivors who had settled in countries such as the United States, Argentina, Australia, Chile, and South Africa. Some Jews have emigrated from Israel elsewhere because of economic problems or disillusionment with political conditions and the continuing Arab–Israeli conflict. Jewish Israeli emigrants are known as yordim.
The waves of immigration to the United States and elsewhere at the turn of the 20th century, the founding of Zionism, and later events, including the pogroms in Imperial Russia (mostly within the Pale of Settlement in present-day Ukraine, Moldova, Belarus and eastern Poland), the massacre of European Jewry during the Holocaust, and the founding of the state of Israel, with the subsequent Jewish exodus from Arab lands, all resulted in substantial shifts in the population centers of world Jewry by the end of the 20th century. More than half of the Jews live in the Diaspora (see Population table). Currently, the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world, is located in the United States, with 6 million to 7.5 million Jews by various estimates. Elsewhere in the Americas, there are also large Jewish populations in Canada (315,000), Argentina (180,000–300,000), and Brazil (196,000–600,000), and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia and several other countries (see History of the Jews in Latin America). According to a 2010 Pew Research Center study, about 470,000 people of Jewish heritage live in Latin America and the Caribbean. Demographers disagree on whether the United States has a larger Jewish population than Israel, with many holding that Israel surpassed the United States in Jewish population during the 2000s, while others maintain that the United States still has the largest Jewish population in the world. Currently, a major national Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North African countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish community of 292,000. In Eastern Europe, exact figures are difficult to establish. The number of Jews in Russia varies widely according to whether a source uses census data (which requires a person to choose a single nationality among choices that include "Russian" and "Jewish") or eligibility for immigration to Israel (which requires that a person have one or more Jewish grandparents). According to the latter criterion, the heads of the Russian Jewish community assert that up to 1.5 million Russians are eligible for aliyah. In Germany, the 102,000 Jews registered with the Jewish community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region, 15 to 20 percent in the Kingdom of Iraq, approximately 10 percent in the Kingdom of Egypt and approximately 7 percent in the Kingdom of Yemen. A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Muslim-majority countries, mainly in Turkey (14,200) and Iran (9,100), while Morocco (2,000), Tunisia (1,000), and the United Arab Emirates (500) host the largest communities in the Arab world.
A small-scale exodus had begun in many countries in the early decades of the 20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries took place primarily from 1948 onward. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily from Iraq, Yemen and Libya, with up to 90 percent of these communities leaving within a few years. The peak of the exodus from Egypt occurred in 1956. The exodus from the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s, when around 80 percent of Iranian Jews left the country. Outside Europe, the Americas, the Middle East, and the rest of Asia, there are significant Jewish populations in Australia (112,500) and South Africa (70,000). There is also a 6,800-strong community in New Zealand. Since at least the time of the Ancient Greeks, a proportion of Jews have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism and losing their Jewish identity. Assimilation took place in all areas and during all time periods, with some Jewish communities, for example the Kaifeng Jews of China, disappearing entirely. The advent of the Jewish Enlightenment of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America in the 19th century accelerated this process, encouraging Jews to increasingly participate in, and become part of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop participating in the Jewish community. Rates of interreligious marriage vary widely: in the United States, the rate is just under 50 percent; in the United Kingdom, around 53 percent; in France, around 30 percent; and in Australia and Mexico, as low as 10 percent. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice. The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations as Jews continue to assimilate into the countries in which they live. The Jewish people and Judaism have experienced various persecutions throughout their history. During Late Antiquity and the Early Middle Ages, the Roman Empire (in its later phases known as the Byzantine Empire) repeatedly repressed the Jewish population, first by ejecting them from their homelands during the pagan Roman era and later by officially establishing them as second-class citizens during the Christian Roman era. According to James Carroll, "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million." (The arithmetic behind this extrapolation is sketched below.) Later in medieval Western Europe, further persecutions of Jews by Christians occurred, notably during the Crusades—when Jews all over Germany were massacred—and in a series of expulsions from the Kingdom of England, Germany, and France.
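Carroll's extrapolation is easier to follow with the intermediate steps written out. A rough reconstruction, assuming a Roman Empire of about 60 million people, a contemporary world population of about 250 million, and a present-day world population of about 8 billion (all three population figures are illustrative assumptions, not from Carroll or the text above):

\[
0.10 \times 60\,\mathrm{M} = 6\,\mathrm{M}\ \text{Jews}, \qquad
\frac{6\,\mathrm{M}}{250\,\mathrm{M}} \approx 2.4\%\ \text{of humanity}, \qquad
0.024 \times 8\,\mathrm{B} \approx 190\,\mathrm{M} \approx 200\,\mathrm{M}.
\]

Holding the ancient share of world population constant is of course the contested step; the quoted figure is an order-of-magnitude counterfactual, not a demographic projection.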
Then there occurred the largest expulsion of all, when Spain and Portugal, after the Reconquista (the Catholic Reconquest of the Iberian Peninsula), expelled both unbaptized Sephardic Jews and the ruling Muslim Moors. In the Papal States, which existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. Islam and Judaism have a complex relationship. Traditionally, Jews and Christians living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several social and legal disabilities, such as prohibitions against bearing arms or giving testimony in courts in cases involving Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad; its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom, exile, or forced conversion, and they were mostly free in their choice of residence and profession. Notable exceptions include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters known as mellahs beginning in the 15th century and especially in the early 19th century. In modern times, it has become commonplace for standard antisemitic themes to be conflated with anti-Zionism in the publications and pronouncements of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic of Iran, and even in the newspapers and other publications of the Turkish Refah Partisi. Throughout history, many rulers, empires and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient to silence dissent. The history of antisemitism includes the First Crusade, which resulted in the massacre of Jews; the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the pogroms backed by the Russian Tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8 percent of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 16 million Jews in 1939, almost 40% were murdered in the Holocaust.
The Holocaust—the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators—remains the most notable modern-day persecution of Jews. The persecution and genocide were accomplished in stages. Legislation to remove the Jews from civil society was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and Roma were crammed into ghettos before being transported hundreds of kilometres by freight train to extermination camps where, if they survived the journey, the majority of them were murdered in gas chambers. Virtually every arm of Germany's bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar has called "a genocidal nation." Throughout Jewish history, Jews have repeatedly been directly or indirectly expelled from both their original homeland, the Land of Israel, and many of the areas in which they have settled. This experience as refugees has shaped Jewish identity and religious practice in many ways, and is thus a major element of Jewish history. In summary, the pogroms in Eastern Europe, the rise of modern antisemitism, the Holocaust, and the rise of Arab nationalism all served to fuel the movements and migrations of huge segments of Jewry from land to land and continent to continent, until they arrived back in large numbers at their original historical homeland in Israel. In the Bible, the patriarch Abraham is described as a migrant to the land of Canaan from Ur of the Chaldees. His descendants, the Children of Israel, undertook the Exodus (meaning "departure" or "exit" in Greek) from ancient Egypt, as described in the Book of Exodus. The first movement documented in the historical record occurred with the resettlement policy of the Neo-Assyrian Empire, which mandated the deportation of conquered peoples; it is estimated that some 4,500,000 among its captive populations suffered this dislocation over three centuries of Assyrian rule. With regard to Israel, Tiglath-Pileser III claims he deported 80% of the population of Lower Galilee, some 13,520 people. Some 27,000 Israelites, 20 to 25% of the population of the Kingdom of Israel, were described as being deported by Sargon II; they were replaced by other deported populations and sent into permanent exile by Assyria, initially to the Upper Mesopotamian provinces of the Assyrian Empire. Between 10,000 and 80,000 people from the Kingdom of Judah were similarly exiled by Babylonia, but these people were then returned to Judea by Cyrus the Great of the Persian Achaemenid Empire. Many Jews were exiled again by the Roman Empire. The 2,000-year dispersion of the Jewish diaspora began under the Roman Empire, as Jews spread throughout the Roman world and, driven from land to land, settled wherever they could live freely enough to practice their religion. Over the course of the diaspora the center of Jewish life moved from Babylonia to the Iberian Peninsula to Poland to the United States and, as a result of Zionism, back to Israel.
There were also many expulsions of Jews during the Middle Ages and the Enlightenment in Europe: in 1290, 16,000 Jews were expelled from England (see the Statute of Jewry); in 1396, 100,000 from France; and in 1421, thousands from Austria. Many of these Jews settled in East-Central Europe, especially Poland. Following the establishment of the Spanish Inquisition, in 1492 the Spanish population of around 200,000 Sephardic Jews was expelled by the Spanish crown and the Catholic Church, followed by expulsions in 1493 in Sicily (37,000 Jews) and in Portugal in 1496. The expelled Jews fled mainly to the Ottoman Empire, the Netherlands, and North Africa, with others migrating to Southern Europe and the Middle East. During the 19th century, France's policies of equal citizenship regardless of religion led to the immigration of Jews (especially from Eastern and Central Europe). This contributed to the arrival of millions of Jews in the New World; over two million Eastern European Jews arrived in the United States from 1880 to 1925. In the latest phase of migrations, the Islamic Revolution in Iran caused many Iranian Jews to flee. Most found refuge in the United States (particularly Los Angeles, California, and Long Island, New York) and Israel. Smaller communities of Persian Jews exist in Canada and Western Europe. Similarly, when the Soviet Union collapsed, many of the Jews in the affected territory, many of whom had been refuseniks, were suddenly allowed to leave. This produced a wave of migration to Israel in the early 1990s. Israel is the only country whose Jewish population is consistently growing through natural population growth, although the Jewish populations of other countries, in Europe and North America, have recently increased through immigration. In the Diaspora, in almost every country the Jewish population in general is either declining or steady, but Orthodox and Haredi Jewish communities, whose members often shun birth control for religious reasons, have experienced rapid population growth. Orthodox and Conservative Judaism discourage proselytism to non-Jews, but many Jewish groups have tried to reach out to the assimilated Jewish communities of the Diaspora to help them reconnect with their Jewish roots. Additionally, while in principle Reform Judaism favours seeking new members for the faith, this position has not translated into active proselytism, instead taking the form of an effort to reach out to non-Jewish spouses of intermarried couples. There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity, so there is less chance of intermarriage. As a result of the efforts by these and other Jewish groups over the past 25 years, there has been a trend (known as the Baal teshuva movement) for secular Jews to become more religiously observant, though the demographic implications of the trend are unknown. Additionally, there is a growing rate of conversion by "Jews by choice", gentiles who decide to become Jews. Contributions Jewish individuals have played a significant role in the development and growth of Western culture, advancing many fields of thought, science and technology, both historically and in modern times, including through discrete trends in Jewish philosophy, Jewish ethics and Jewish literature, as well as specific trends in Jewish culture, including Jewish art, Jewish music, Jewish humor, Jewish theatre, Jewish cuisine and Jewish medicine.
Jews have established various Jewish political and religious movements and, through the authorship of the Hebrew Bible and parts of the New Testament, provided the foundation for Christianity and Islam. More than 20 percent of Nobel Prizes have been awarded to individuals of Jewish descent. Philanthropic giving is a core function of many Jewish organizations.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-initial_lineup-236] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of the licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as Nintendo had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to turn what they had developed with Nintendo and Sega into a console of their own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama said that Sony further wanted to emphasise the new console's ability to use the CD-ROM format's Red Book audio in its games alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic/Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it was already confirmed behind closed doors by December 1993 that Ridge Racer would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the C programming language also proved useful, as it safeguarded the future compatibility of software should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect of development given the 3.5 megabyte restriction.
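To make that memory pressure concrete: with only a few megabytes of total RAM and no virtual memory, console programmers of the era typically carved memory into fixed budgets up front rather than relying on a general-purpose heap. Below is a minimal sketch of that budgeting discipline in C; all names and sizes are illustrative assumptions, not the PlayStation SDK or any real project's layout.

    #include <stddef.h>
    #include <stdint.h>

    #define MAIN_RAM_BUDGET (2 * 1024 * 1024)  /* the console's 2 MB of main RAM */

    /* A bump allocator: allocations only advance a cursor, and the whole
       arena is released at once (e.g. when a level is unloaded). */
    typedef struct {
        uint8_t *base;
        size_t   size;
        size_t   used;
    } Arena;

    static void arena_init(Arena *a, uint8_t *backing, size_t size) {
        a->base = backing;
        a->size = size;
        a->used = 0;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 3) & ~(size_t)3;      /* keep allocations 4-byte aligned */
        if (a->used + n > a->size)
            return NULL;               /* budget exceeded: fail loudly, no paging */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void arena_reset(Arena *a) { a->used = 0; }  /* free everything at once */

    static uint8_t backing[MAIN_RAM_BUDGET / 4];  /* e.g. a 512 KB level arena */

    int main(void) {
        Arena level;
        arena_init(&level, backing, sizeof backing);
        void *geometry = arena_alloc(&level, 300 * 1024); /* hypothetical model data */
        void *textures = arena_alloc(&level, 200 * 1024); /* hypothetical texture pages */
        (void)geometry; (void)textures;
        arena_reset(&level);  /* reuse the same memory for the next level */
        return 0;
    }

The design point is that every subsystem gets a fixed slice of the 3.5 MB and an allocation failure is an immediate, visible bug, which is one plausible reading of what the Ocean engineer meant by RAM allocation being the challenging part.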
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, where Race said "$299" and walked off to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer, and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS one model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the registration of the trademark by a third company meant the console could not be released there, and the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it officially. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which geometric shapes stood in for letters, stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" with a red "E" (read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say, "Bullshit.
Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing broadened from its earlier focus on mature players to target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to dent Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi claimed would feature a graphics processor able to push more raw polygons than any console in history, rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units; the PlayStation 2 later reached the same milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on a 3D and matrix-maths coprocessor on the same die (the Geometry Transformation Engine, attached as coprocessor 2, "cop2") to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were removed in later revisions due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transformation Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw up to 4,000 sprites, and can render 180,000 texture-mapped, light-sourced polygons per second or up to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models progressively removed further connectors, including the parallel port, with late revisions retaining only the serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software necessary to program PlayStation games and applications in C. 
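Net Yaroze titles were written in plain C against Sony's libraries. As a hedged illustration of the style of arithmetic involved (a minimal sketch, not Sony's actual API), the fixed-point matrix math that the GTE accelerates in hardware can be mimicked as follows; the 4096 = 1.0 scaling matches the 12-bit fractional format commonly described for the GTE's rotation matrices:

```c
/* Illustrative sketch only: GTE-style fixed-point rotation in portable C.
   The real GTE is driven through coprocessor-2 instructions; here the
   same fixed-point convention (4096 represents 1.0) is emulated. */
#include <stdio.h>
#include <stdint.h>

#define ONE 4096 /* 1.0 in 20.12 fixed point */

typedef struct { int32_t x, y, z; } Vec3;

/* Multiply two 20.12 fixed-point values, keeping 12 fractional bits. */
static int32_t fixed_mul(int32_t a, int32_t b) {
    return (int32_t)(((int64_t)a * b) >> 12);
}

/* Apply a 3x3 fixed-point matrix to a vector, the core of the GTE's
   matrix-vector operations. */
static Vec3 mat_apply(const int32_t m[3][3], Vec3 v) {
    Vec3 r;
    r.x = fixed_mul(m[0][0], v.x) + fixed_mul(m[0][1], v.y) + fixed_mul(m[0][2], v.z);
    r.y = fixed_mul(m[1][0], v.x) + fixed_mul(m[1][1], v.y) + fixed_mul(m[1][2], v.z);
    r.z = fixed_mul(m[2][0], v.x) + fixed_mul(m[2][1], v.y) + fixed_mul(m[2][2], v.z);
    return r;
}

int main(void) {
    /* 90-degree rotation about the Z axis: cos = 0, sin = 1.0 (= 4096). */
    const int32_t rot[3][3] = {
        { 0,  -ONE, 0   },
        { ONE, 0,   0   },
        { 0,   0,   ONE },
    };
    Vec3 v = { ONE, 0, 0 }; /* unit vector along X */
    Vec3 r = mat_apply(rot, v);
    printf("(%d, %d, %d) -> (%d, %d, %d)\n",
           (int)v.x, (int)v.y, (int)v.z, (int)r.x, (int)r.y, (int)r.z);
    return 0; /* prints (4096, 0, 0) -> (0, 4096, 0) */
}
```

Working entirely in integers like this is what allowed the PlayStation's R3000, which has no floating-point unit, to sustain high 3D transform rates through the GTE.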
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding a degree of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a trademark set of symbols which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the larger average hand size in those regions. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony removed this haptic feedback from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and the controller has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models can play CD-Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, because the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early SCPH-1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as it took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. 
Famicom Tsūshin scored the console a 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities and Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute around 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers; the CD format's lower production costs and flexibility to meet demand were a large part of its appeal to publishers. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-61] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole; the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first known black hole was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, among the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars, in contrast to the modern concept of an extremely dense object. 
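In modern notation, the Newtonian reasoning behind their proposals, and behind Michell's calculation described below, reduces to a short worked example (a sketch using today's constants, not part of the historical texts):

v_{\text{esc}} = \sqrt{\frac{2GM}{R}}, \qquad M = \frac{4}{3}\pi\rho R^{3} \;\Rightarrow\; v_{\text{esc}} = \sqrt{\frac{8\pi G\rho}{3}}\,R,

so at fixed density the escape velocity grows linearly with radius. The Sun's surface escape velocity is about 618 km/s, so a body of solar density with 500 times the solar radius would have an escape velocity of roughly 500 × 618 ≈ 3.1 × 10⁵ km/s, just above the speed of light (≈ 3.0 × 10⁵ km/s).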
In a short passage of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape, as the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, while speculating on the origin of the Solar System in his book Exposition du Système du Monde, Laplace mentioned that a star could be invisible if it were sufficiently large. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism are invariant under a Lorentz transformation: they are identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the gravitational redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein had refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, the astrophysicist Karl Schwarzschild set out to apply the theory to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then poorly understood theory of general relativity.: 134 In 1939, Einstein himself used general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius; he missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel showed that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole is defined by its mass alone. Similar uniqueness results were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions appeared only because of the assumption of perfect spherical symmetry, and that they would therefore not appear in generic situations where black holes are not exactly symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before supermassive black holes gained the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the centers of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei but ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
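The temperature appearing in Hawking's result, and the "billionths of a kelvin" scale for stellar black holes quoted in the introduction, follow from the standard formula (a worked check using standard constants, not a value taken from this article):

T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}} \approx 6.2\times10^{-8}\,\mathrm{K}\,\frac{M_{\odot}}{M},

which for a black hole of a few tens of solar masses gives a few times 10⁻⁹ K, and which makes explicit the inverse proportionality between temperature and mass.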
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in a galaxy's central bulge to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source at the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and the Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational-wave events have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole at the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes: Andrea Ghez and Reinhard Genzel shared one half for their discovery that Sagittarius A* is a supermassive black hole, and Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes could not be honored, since he had died in 2018. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting an infinite time at an infinite distance from the black hole to confirm that nothing has escaped, so the definition cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is commonly taken to be a compact object with a mass larger than about four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture holds for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist: non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes an uncharged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass: the total electric charge Q and the total angular momentum J are expected to satisfy the inequality

\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}

for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate the inequality exist, but they do not possess an event horizon; these so-called naked singularities could be observed from the outside. Because such singularities would make the universe inherently unpredictable, many physicists believe they cannot exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of naked singularities through the gravitational collapse of realistic matter. However, the hypothesis has not been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, since natural processes counteract increasing spin and charge as a black hole approaches extremality. The total mass of a black hole can be estimated by analyzing the motion of objects near it, such as stars or gas. All black holes spin, often quickly; one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole, Sagittarius A*, rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects. 
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole; the method requires an independent measurement of the black hole's mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spins of both progenitor black holes and of the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is

J \leq \frac{GM^{2}}{c},

allowing definition of a dimensionless spin magnitude such that

0 \leq \frac{cJ}{GM^{2}} \leq 1.

Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q of a nonspinning black hole is bounded by

Q \leq \sqrt{G}\,M,

where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the matter to condense into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same state. Smaller progenitor stars, with masses less than about 8 M☉, leave remnants held together by the degeneracy pressure of electrons, becoming white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the remnant is held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. 
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds a black hole becomes unstable once the hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the surroundings of some black holes among the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets; however, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism by which jets form is not yet known, but several options have been proposed. One proposed method of fuelling these jets is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves the extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion. 
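The energy available to such Penrose-type mechanisms can be quantified. As a standard result of black hole mechanics (not a figure given in this article), the irreducible mass of a rotating black hole bounds the extractable rotational energy:

M_{\mathrm{irr}}^{2} = \frac{M^{2}}{2}\left(1+\sqrt{1-a_{*}^{2}}\right), \qquad a_{*} = \frac{cJ}{GM^{2}},

so for a maximally spinning black hole (a_* = 1) the extractable fraction is 1 − M_irr/M = 1 − 1/√2, about 29% of the hole's total mass-energy.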
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be characterized as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts because of their thick, toroidal shape. Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part travelling away appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is r_ISCO = 3 r_s = 6GM/c², where r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly with particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
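Because every radius in this discussion scales linearly with mass, the numbers are easy to evaluate. The sketch below, for a non-spinning, uncharged hole and rounded physical constants, computes the Schwarzschild radius r_s = 2GM/c², the ISCO at 3 r_s, the photon sphere at 1.5 r_s (introduced in the next paragraph), and the mean density within r_s:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2GM/c^2 for a non-spinning, uncharged black hole."""
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg: float) -> float:
    """Average density inside the Schwarzschild radius (kg/m^3); falls off as 1/M^2."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / ((4 / 3) * math.pi * r**3)

for m_solar in (10, 1e8):                      # a stellar and a supermassive example
    m = m_solar * M_SUN
    rs = schwarzschild_radius(m)
    print(f"M = {m_solar:.0e} M_sun: r_s = {rs/1e3:.3g} km, "
          f"ISCO = {3*rs/1e3:.3g} km, photon sphere = {1.5*rs/1e3:.3g} km, "
          f"mean density = {mean_density(m):.3g} kg/m^3")
```

The printed densities fall off as 1/M², consistent with the comparison to water made below for a 10⁸ M☉ hole.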
For example, the ISCO for a particle orbiting retrograde can be as far out as about 9 r_s, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary at which photons moving on tangents to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; for non-Schwarzschild black holes, the radius is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will lie 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde it will lie 3–5 Schwarzschild radii from the center; the exact location depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will be only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates much like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this region it is no longer possible for free-falling matter to follow circular orbits or to stop a final descent into the black hole. Instead, it rapidly plunges toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through r_s = 2GM/c² ≈ 2.95 (M/M☉) km, where M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to r₊ = GM/c², half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward, towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clock appears to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole build up at the horizon, causing the curvature of spacetime there to grow to infinity. This would cause an observer falling in to experience tidal forces. The phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature is infinite at the singularity. This contrasts with a strong singularity, at which an object would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole contains a singularity: a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative formulations of general relativity, including some that add quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, holds that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large but not infinite.

Formation

Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse of hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it runs out of hydrogen to fuse and starts fusing progressively heavier elements, up to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion then ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of the infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly redshifted, eventually fading away. Observations of quasars at redshift z ∼ 7, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is the direct collapse of the nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star that collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, the conditions needed to form black holes are rare and mostly found only in stars. However, in the early universe, conditions may have allowed black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in a region could become large enough to cause it to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and lacked the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.

Evolution

Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two black holes in a supermassive binary approach each other, most nearby stars are ejected, leaving little for the remaining black holes to interact with gravitationally in a way that would allow them to draw closer together. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be powered by the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
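For spherical accretion of ionized hydrogen, the limit just described corresponds to the textbook Eddington luminosity L_Edd = 4πGMm_p c/σ_T. A minimal sketch, with rounded constants and illustrative masses:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
M_PROTON = 1.673e-27   # proton mass, kg
SIGMA_T = 6.652e-29    # Thomson scattering cross-section, m^2

def eddington_luminosity(mass_kg: float) -> float:
    """L_Edd = 4*pi*G*M*m_p*c / sigma_T (spherical accretion of ionized hydrogen)."""
    return 4 * math.pi * G * mass_kg * M_PROTON * c / SIGMA_T

for m_solar in (10, 1e8):                     # illustrative stellar and supermassive masses
    L = eddington_luminosity(m_solar * M_SUN)
    print(f"M = {m_solar:.0e} M_sun: L_Edd ≈ {L:.2e} W ({L/3.828e26:.2e} L_sun)")
```

The limit works out to roughly 1.3×10³¹ W per solar mass, which is why the most luminous quasars imply very large black hole masses.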
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly while the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass smaller than the Moon's. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict them. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's energy to changes in its surface area, angular momentum, and charge. The second law says that the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
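The temperature scaling quoted above is straightforward to check. A minimal sketch using the standard Schwarzschild-hole formula T = ħc³/(8πGMk_B), with rounded constants (the lunar-mass comparison is the one made earlier in this section):

```python
import math

HBAR = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """T = hbar*c^3 / (8*pi*G*M*k_B) for a non-rotating, uncharged black hole."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"1 M_sun: T ≈ {hawking_temperature(M_SUN) * 1e9:.0f} nK")   # ~62 nK

# Mass at which the Hawking temperature equals the 2.7 K microwave background:
m_eq = HBAR * C**3 / (8 * math.pi * G * 2.7 * K_B)
print(f"T = 2.7 K at M ≈ {m_eq:.2e} kg (the Moon is about 7.3e22 kg)")
```

The first line reproduces the ~62 nK figure for one solar mass, and the threshold mass comes out below the Moon's, matching the statement above.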
The two sets of laws are not equivalent, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have an entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29

Observational evidence

Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel out, producing a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise in order to detect them. Since the first measurements in 2016, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
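The mass estimate described next rests on Kepler's third law, M ≈ 4π²a³/GP², applied to these stellar orbits. The sketch below uses rounded published values for the orbit of the star S2 (treated here as illustrative assumptions rather than the full fitted solution):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
YEAR = 3.156e7       # Julian year, s

# Approximate orbital elements for S2 (rounded; for illustration only):
a = 1030 * AU        # semi-major axis, ~1030 AU
P = 16.05 * YEAR     # orbital period, ~16 years

# Kepler's third law, neglecting the star's mass relative to the hole's:
M = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"Enclosed mass ≈ {M / M_SUN:.1e} M_sun")   # ~4e6 M_sun
```

With these rounded inputs the enclosed mass comes out near 4×10⁶ M☉, of the same order as the refined value quoted below.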
In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years of Sagittarius A*. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculation of the mass of Sagittarius A* to 4.3×10⁶ M☉, within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit most of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
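The quantity underlying such binary estimates is the binary mass function f = PK³/2πG, a hard lower bound on the compact object's mass obtained from the orbital period P and the companion's radial-velocity semi-amplitude K; given an assumed inclination i and companion mass m, the compact object's mass M follows from f = (M sin i)³/(M + m)². A sketch with rounded, Cygnus X-1-like numbers (all inputs are illustrative assumptions):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
DAY = 86400.0       # seconds per day

def mass_function(period_s: float, k_ms: float) -> float:
    """f = P*K^3 / (2*pi*G): a strict lower bound (kg) on the unseen mass."""
    return period_s * k_ms**3 / (2 * math.pi * G)

def compact_object_mass(f_kg: float, companion_kg: float, incl_deg: float) -> float:
    """Solve f = (M*sin i)^3 / (M + m)^2 for M by bisection."""
    s = math.sin(math.radians(incl_deg))
    def g(M: float) -> float:
        return (M * s) ** 3 / (M + companion_kg) ** 2 - f_kg
    lo, hi = 1e29, 1e33                     # bracket roughly 0.05 to 500 M_sun
    for _ in range(100):                    # g is increasing in M, so bisect
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Rounded, Cygnus X-1-like inputs: P ~ 5.6 d, K ~ 75 km/s, i ~ 27 deg, m ~ 19 M_sun.
f = mass_function(5.6 * DAY, 75e3)
M = compact_object_mass(f, 19 * M_SUN, 27.0)
print(f"mass function ≈ {f / M_SUN:.2f} M_sun, implied M ≈ {M / M_SUN:.1f} M_sun")
```

With these inputs the mass function is about 0.24 M☉ and the implied compact-object mass lands in the mid-teens of solar masses, far above the TOV limit, which is the sense in which such systems argue for a black hole.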
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high level of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of the effects of their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass: 7.1±1.3 M☉.

Alternatives

While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Denser still, electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes.: 12 A few theoretical objects have been conjectured that would match observations of astronomical black hole candidates identically or near-identically but function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'.

Open questions

According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.: 126 Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, the supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, for example when dense gas in the accretion disk traps radiation and limits the outward radiation pressure that would otherwise cap accretion; however, the formation of bipolar jets can prevent super-Eddington rates.

In fiction

Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the 1935 short story Starship Invincible, with its "hole in space". As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in novels and in films such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other means of faster-than-light travel, as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World_Trade_Center_(1973%E2%80%932001)] | [TOKENS: 11069] |
World Trade Center (1973–2001)

The original World Trade Center (WTC) was a complex of seven buildings in the Financial District of Lower Manhattan in New York City. Built primarily between 1966 and 1975, it was dedicated on April 4, 1973, and was destroyed on September 11, 2001. The complex included the 110-story-tall Twin Towers, at the time of their completion the tallest buildings in the world, with the original 1 World Trade Center (the North Tower) at 1,368 feet (417 m), and 2 World Trade Center (the South Tower) at 1,362 feet (415.1 m); they were also the tallest twin skyscrapers in the world until 1996, when the Petronas Towers opened in Kuala Lumpur, Malaysia. The other buildings in the complex were the Marriott World Trade Center (3 WTC), 4 WTC, 5 WTC, 6 WTC, and 7 WTC. The complex contained 13,400,000 square feet (1,240,000 m²) of office space and, prior to its completion, was projected to accommodate an estimated 130,000 people. The core complex cost about $400 million (equivalent to $2.37 billion in 2024). David Rockefeller suggested the construction of a large office building complex to help stimulate urban renewal in Lower Manhattan, and his brother Nelson, then New York's 49th governor, signed the legislation to build it. The buildings at the complex were designed by Minoru Yamasaki. In 1998, the Port Authority of New York and New Jersey decided to privatize the complex by leasing the buildings to a private company to manage, and it awarded the lease to Silverstein Properties in July 2001. During its existence, the World Trade Center symbolized globalization and the economic power and prosperity of the United States. Although its design was initially criticized by New Yorkers and architectural critics, the Twin Towers became an icon of New York City. The complex had a major role in popular culture and, according to one estimate, was depicted in 472 films. The Twin Towers were also the site of Philippe Petit's tightrope-walking performance on August 7, 1974. Following the September 11 attacks, mentions of the complex in various media were altered or deleted, and several dozen "memorial films" were created. The World Trade Center, a symbol of New York City and a major economic center, was the target of several major crimes and terrorist incidents, including a fire on February 12, 1975; a bombing on February 26, 1993; and a bank robbery on January 14, 1998. On September 11, 2001, al-Qaeda-affiliated hijackers flew two Boeing 767 jets, one into each of the Twin Towers, seventeen minutes apart; between 16,400 and 18,000 people were in the Twin Towers when they were struck. The fires from the impacts were intensified by the planes' burning jet fuel and, along with the initial damage to the buildings' structural columns, ultimately caused both towers to collapse. The attacks killed 2,606 people in and around the towers, as well as all 147 on board the two aircraft (not including the 10 hijackers). Falling debris from the towers, combined with fires that the debris initiated in several surrounding buildings, led to the partial or complete collapse of all the buildings in the WTC complex, including 7 World Trade Center, and caused catastrophic damage to 10 other large structures in the surrounding area. The cleanup and recovery process at the World Trade Center site took eight months, during which the remains of the other buildings were demolished. On May 30, 2002, the last piece of WTC steel was ceremonially removed.
A new World Trade Center complex is being built with six new skyscrapers and several other buildings, many of which are complete. A memorial and museum to those killed in the attacks, a new rapid transit hub, and an elevated park have opened. The memorial features two square reflecting pools marking where the Twin Towers stood. One World Trade Center, the tallest building in the Western Hemisphere at 1,776 feet (541 m) and the lead building of the new complex, completed construction in May 2013 and opened in November 2014.

Before the World Trade Center

The western portion of the World Trade Center site was originally under the Hudson River. The shoreline was in the vicinity of Greenwich Street, which is closer to the site's eastern border. It was on this shoreline, close to the intersection of Greenwich and the former Dey Street, that Dutch explorer Adriaen Block's ship, Tyger, burned to the waterline in November 1613, stranding him and his crew and forcing them to overwinter on the island. They built the first European settlement in Manhattan. The remains of the ship were buried under landfill when the shoreline was extended beginning in 1797 and were discovered during excavation work in 1916. The remains of a second eighteenth-century ship were discovered in 2010 during excavation work at the site. The ship, believed to be a Hudson River sloop, was found just south of where the Twin Towers stood, about 20 feet (6 meters) below the surface. Later, the area became New York City's Radio Row, which existed from 1921 to 1966. The neighborhood was a warehouse district in what is now Tribeca and the Financial District. Harry Schneck opened City Radio on Cortlandt Street in 1921, and eventually the area held several blocks of electronics stores, with Cortlandt Street as its central axis. The used radios, war-surplus electronics (e.g., AN/ARC-5 radios), junk, and parts were often piled so high that they would spill out onto the street, attracting collectors and scroungers. According to a business writer, the area was also the origin of the electronic-component distribution business. The idea of establishing a World Trade Center in New York City was first proposed in 1943. The New York State Legislature passed a bill authorizing New York Governor Thomas E. Dewey to begin developing plans for the project, but the plans were put on hold in 1949. During the late 1940s and 1950s, economic growth in New York City was concentrated in Midtown Manhattan. To help stimulate urban renewal in Lower Manhattan, David Rockefeller suggested that the Port Authority build a World Trade Center there. Plans for the use of eminent domain to remove the shops in Radio Row, bounded by Vesey, Church, Liberty, and West Streets, began in 1961, when the Port Authority of New York and New Jersey was deciding to build the world's first world trade center. The agency had two choices for the site: the east side of Lower Manhattan, near the South Street Seaport; or the west side, near the Hudson and Manhattan Railroad (H&M) station, Hudson Terminal. Initial plans, made public in 1961, identified a site along the East River for the World Trade Center. As a bi-state agency, the Port Authority required approval for new projects from the governors of both New York and New Jersey. New Jersey Governor Robert B. Meyner objected to New York getting a $335 million project. Toward the end of 1961, negotiations with outgoing New Jersey Governor Meyner reached a stalemate.
At the time, ridership on New Jersey's H&M Railroad had declined substantially, from a high of 113 million riders in 1927 to 26 million in 1958, after new automobile tunnels and bridges had opened across the Hudson River. In a December 1961 meeting between Port Authority director Austin J. Tobin and newly elected New Jersey Governor Richard J. Hughes, the Port Authority offered to take over the H&M Railroad. They also decided to move the World Trade Center project to the Hudson Terminal building site on the west side of Lower Manhattan, a more convenient location for New Jersey commuters. With the new location and the Port Authority's acquisition of the H&M Railroad, New Jersey agreed to support the World Trade Center project. As part of the deal, the Port Authority renamed the H&M the Port Authority Trans-Hudson railroad, or PATH for short. To compensate Radio Row business owners for their displacement, the Port Authority gave each business $3,000, without regard to how long the business had been there or how prosperous it was. The Port Authority began purchasing properties in the area for the World Trade Center by March 1965, and demolition of Radio Row began in March 1966; it was completely demolished by the end of the year. Approval was also needed from New York City Mayor John Lindsay and the New York City Council. Disagreements with the city centered on tax issues: the New York City Planning Commission wanted the Port Authority to increase its payments in lieu of taxes (PILOT) to the city government. On August 3, 1966, an agreement was reached whereby the Port Authority would make annual PILOT payments for the portion of the World Trade Center leased to private tenants, with the payments rising in subsequent years as the real estate tax rate increased. In May 1967, the New York City Planning Commission approved several changes to the street grid that would allow the Port Authority to begin acquiring land for the complex.

Development

On September 20, 1962, the Port Authority announced the selection of Minoru Yamasaki as lead architect and Emery Roth & Sons as associate architects. Yamasaki devised the plan to incorporate twin towers. His original plan called for the towers to be 80 stories tall, but to meet the Port Authority's requirement for 10,000,000 square feet (930,000 m²) of office space, the buildings would each have to be 110 stories tall. The Port Authority publicly presented a model for the complex on January 18, 1964. The original plans called for two towers rising 1,350 feet (410 m), as well as several six-story buildings surrounding a central plaza (later the Austin J. Tobin Plaza). These buildings were to include office space, restaurants, a hotel, exhibition space, stores, and an information center about world trade. The agency released its final model of the complex in March 1966. Despite public criticism over the height of the 110-story twin towers, Yamasaki & Associates and Emery Roth & Sons made only relatively minor changes to the plans, which were released in May 1966. The revised plans called for the twin towers and four additional low-rise structures to surround the central plaza. Yamasaki's design for the World Trade Center called for a square plan approximately 208 feet (63 m) on each side. The buildings were designed with narrow office windows 18 inches (46 cm) wide, which reflected Yamasaki's fear of heights as well as his desire to make building occupants feel secure. His design included building facades clad in aluminum alloy.
The World Trade Center was one of the most striking American implementations of the architectural ethic of Le Corbusier and was the seminal expression of Yamasaki's gothic modernist tendencies. He was also inspired by Islamic architecture, elements of which he incorporated in the buildings' design, having previously designed Saudi Arabia's Dhahran International Airport with the Saudi Binladin Group. A major limiting factor in building height is the issue of elevators; the taller the building, the more elevators are needed to service it, requiring more space-consuming elevator banks. Yamasaki and the engineers decided to use a new system with two "sky lobbies", floors where people could switch from a large-capacity express elevator to a local elevator that stops at each floor in a section. This system, inspired by the local-express operation of New York City's subway, allowed the design to stack local elevators within the same elevator shaft. Located on the 44th and 78th floors of each tower, the sky lobbies enabled the elevators to be used efficiently, increasing the amount of usable space on each floor from 62 to 75 percent by reducing the number of elevator shafts. Altogether, the World Trade Center had 95 express and local elevators. The structural engineering firm Worthington, Skilling, Helle & Jackson worked to implement Yamasaki's design, developing the framed-tube structural system used in the twin towers. The Port Authority's Engineering Department served as foundation engineers, Joseph R. Loring & Associates as electrical engineers, and Jaros, Baum & Bolles (JB&B) as mechanical engineers. Tishman Realty & Construction Company was the general contractor on the World Trade Center project. Guy F. Tozzoli, director of the World Trade Department at the Port Authority, and Rino M. Monti, the Port Authority's Chief Engineer, oversaw the project. As an interstate agency, the Port Authority was not subject to the local laws and regulations of the City of New York, including building codes. Nonetheless, the World Trade Center's structural engineers ended up following draft versions of New York City's new 1968 building codes. The framed-tube design, introduced in the 1960s by Bangladeshi-American structural engineer Fazlur Rahman Khan, was a new approach that allowed more open floor plans than traditional designs, which distributed columns throughout the interior to support building loads. Each of the World Trade Center towers had 236 high-strength, load-bearing perimeter steel columns, which acted as Vierendeel trusses. The perimeter columns were spaced closely together to form a strong, rigid wall structure, supporting virtually all lateral loads, such as wind loads, and sharing the gravity load with the core columns. The perimeter structure, containing 59 columns per side, was constructed with extensive use of prefabricated modular pieces, each consisting of three columns, three stories tall, connected by spandrel plates. The spandrel plates were welded to the columns to create the modular pieces off-site at the fabrication shop. Adjacent modules were bolted together, with the splices occurring at mid-span of the columns and spandrels. The spandrel plates were located at each floor, transmitting shear stress between columns and allowing them to work together in resisting lateral loads. The joints between modules were staggered vertically, so that the column splices between adjacent modules were not on the same floor.
Below the 7th floor, down to the foundation, there were fewer, more widely spaced perimeter columns to accommodate doorways. The core of the towers housed the elevator and utility shafts, restrooms, three stairwells, and other support spaces. The core of each tower was a rectangular area 87 by 135 feet (27 by 41 m) and contained 47 steel columns running from the bedrock to the top of the tower. The large, column-free space between the perimeter and core was bridged by prefabricated floor trusses. The floors supported their own weight as well as live loads, provided lateral stability to the exterior walls, and distributed wind loads among the exterior walls. The floors consisted of 4-inch (10 cm) thick lightweight concrete slabs laid on a fluted steel deck. A grid of lightweight bridging trusses and main trusses supported the floors. The trusses connected to the perimeter at alternate columns and were on 6-foot-8-inch (2.03 m) centers. The top chords of the trusses were bolted to seats welded to the spandrels on the exterior side and to a channel welded to the core columns on the interior side. The floors were connected to the perimeter spandrel plates with viscoelastic dampers, which helped reduce the amount of sway felt by building occupants. Hat trusses (or "outrigger trusses") located from the 107th floor to the top of the buildings were designed to support a tall communication antenna on top of each building. Only 1 WTC (the North Tower) actually had an antenna fitted, added in May 1979. The truss system consisted of six trusses along the long axis of the core and four along the short axis. This truss system allowed some load redistribution between the perimeter and core columns and supported the transmission tower. The framed-tube design, using steel core and perimeter columns protected with sprayed-on fire-resistant material, created a relatively lightweight structure that would sway more in response to the wind than traditional structures such as the Empire State Building, which has thick, heavy masonry for fireproofing of its steel structural elements. During the design process, wind tunnel tests were done to establish the design wind pressures that the World Trade Center towers could be subjected to and the structural response to those forces. Experiments were also done to evaluate how much sway occupants could comfortably tolerate; however, many subjects experienced dizziness and other ill effects. One of the chief engineers, Leslie Robertson, worked with Canadian engineer Alan G. Davenport to develop viscoelastic dampers to absorb some of the sway. These viscoelastic dampers, used throughout the structures at the joints between floor trusses and perimeter columns, along with some other structural modifications, reduced the building sway to an acceptable level. In March 1965, the Port Authority began acquiring property at the World Trade Center site. Demolition work began on March 21, 1966, to clear thirteen square blocks of low-rise buildings in Radio Row for its construction. Groundbreaking for the construction of the World Trade Center took place on August 5, 1966. The site of the World Trade Center was located on filled land, with the bedrock 65 feet (20 m) below. To construct the World Trade Center, it was necessary to build a "bathtub" with a slurry wall around the West Street side of the site, to keep water from the Hudson River out. The slurry method selected by the Port Authority's chief engineer, John M.
Kyle Jr., involved digging a trench and, as excavation proceeded, filling the space with a slurry consisting of a mixture of bentonite and water, which plugged holes and kept groundwater out. When the trench was dug out, a steel cage was inserted and concrete was poured in, forcing the slurry out. The slurry wall took fourteen months to complete, and it had to be finished before excavation of material from the interior of the site could begin. The 1,200,000 cubic yards (920,000 m3) of excavated material were used (along with other fill and dredge material) to expand the Manhattan shoreline across West Street to form Battery Park City. The Port Authority awarded $74 million in contracts to various steel suppliers in January 1967, and excavation of the bathtub began in late 1967. Construction work began on the North Tower in August 1968, and construction on the South Tower was under way by January 1969. The original Hudson Tubes, which carried PATH trains into Hudson Terminal, remained in service during the construction process until 1971, when a new station opened. The topping-out ceremony of 1 WTC (the North Tower) took place on December 23, 1970, while 2 WTC's ceremony (the South Tower) occurred on July 19, 1971. Extensive use of prefabricated components helped to speed up the construction process. The first tenants moved into the North Tower on December 15, 1970, while it was still under construction; the South Tower began accepting tenants in January 1972. The complex had 111 tenants with a combined 1,800 employees by late 1971. When the Twin Towers were completed, the total costs to the Port Authority had reached $900 million. The ribbon-cutting ceremony took place on April 4, 1973. In addition to the Twin Towers, the plan for the World Trade Center complex included four other low-rise buildings, which were built in the early 1970s. The 47-story 7 World Trade Center building was added in the 1980s to the north of the main complex. Altogether, the main World Trade Center complex occupied a 16-acre (65,000 m2) superblock. Complex The World Trade Center complex housed more than 430 companies that were engaged in various commercial activities. The complex hosted 13,400,000 square feet (1,240,000 m2) of office space, which, according to a 1970 account, was supposed to accommodate 130,000 people. On a typical weekday, an estimated 50,000 people worked in the complex and another 140,000 passed through as visitors. The World Trade Center was so large that it had its own ZIP code: 10048. The towers offered expansive views from the observation deck atop the South Tower and the Windows on the World restaurant on top of the North Tower. The Twin Towers became known worldwide, appearing in numerous movies and television shows as well as on postcards and other merchandise. They became a New York icon, in the same league as the Empire State Building, the Chrysler Building, and the Statue of Liberty. The World Trade Center was compared to Rockefeller Center, which David Rockefeller's brother Nelson Rockefeller had developed in midtown Manhattan. 1 World Trade Center (the North Tower) and 2 World Trade Center (the South Tower) were the main buildings of the complex. The visually nearly identical skyscrapers, commonly referred to as the Twin Towers, were designed by architect Minoru Yamasaki as framed-tube structures, a design that was novel at the time and provided tenants with open floor plans uninterrupted by columns or walls. Construction of the towers began in 1966.
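The excavation figure quoted above is easy to sanity-check with a unit conversion (a yard is exactly 0.9144 m, so 1 yd³ ≈ 0.7646 m³):

```python
CUBIC_M_PER_CUBIC_YD = 0.9144 ** 3   # exact, since a yard is 0.9144 m by definition

excavated_yd3 = 1_200_000
print(f"{excavated_yd3:,} yd^3 = {excavated_yd3 * CUBIC_M_PER_CUBIC_YD:,.0f} m^3")
# -> about 917,000 m^3, which the text rounds to 920,000 m^3
```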
When completed in 1972, 1 World Trade Center became the tallest building in the world for two years, surpassing the Empire State Building after its 40-year reign. The North Tower stood 1,368 feet (417 m) tall and featured a 362-foot (110 m) telecommunications antenna or mast that was built on the roof in 1979 (and upgraded in 1999 to accommodate DTV broadcasts). With this addition, the highest point of the North Tower reached 1,730 feet (530 m). Chicago's Willis Tower, then called the Sears Tower, which was finished in May 1973, reached 1,450 feet (440 m) at the rooftop. When completed in 1973, the South Tower became the second-tallest building in the world at 1,362 feet (415 m). Its rooftop observation deck was 1,362 ft (415 m) high and its indoor observation deck was 1,310 ft (400 m) high. Each tower stood over 1,350 feet (410 m) high and occupied about 1 acre (4,000 m2) of the total 16 acres (65,000 m2) of the site's land. During a press conference in 1973, Yamasaki was asked, "Why two 110-story buildings? Why not one 220-story building?" His tongue-in-cheek response was: "I didn't want to lose the human scale." Architectural critic Ada Louise Huxtable criticized the design of the Twin Towers when they were first announced, saying: "Here we have the world's daintiest architecture for the world's biggest buildings." The Twin Towers, at 110 floors each, had more floors than any other building; that number was not surpassed until the Burj Khalifa opened in 2010. The towers were also the world's tallest twin buildings until 1996, when the Petronas Towers opened. Each tower had a total mass of around 500,000 tons. The original World Trade Center had a five-acre (two-hectare) plaza around which all of the buildings in the complex, including the Twin Towers, were centered. World Trade Center officials had wanted the plaza to be a "contemplative space" or a Zen garden. In 1982, the plaza was renamed after the Port Authority's late chairman, Austin J. Tobin, who authorized the construction of the original World Trade Center. During the summer, the Port Authority installed a portable stage for musicians and performers in Tobin Plaza, typically backed up against the North Tower. The series of concerts and events was called "OnStage at the Twin Towers". At the center of the plaza stood the monumental sculpture The Sphere by German artist Fritz Koenig. The bronze sculpture stood at the center of a fountain and completed a full rotation every 24 hours. The site had other sculptures such as Ideogram, Cloud Fortress, and the 1993 World Trade Center Bombing Memorial fountain. Muzak background music played from loudspeakers installed throughout the plaza. For many years, the plaza was often beset by brisk winds at ground level owing to the Venturi effect between the two towers. Some gusts were so strong that pedestrians had to be aided by ropes. In 1997, Tony May opened an Italian restaurant called "Gemelli" in the plaza next to 4 World Trade Center. The following year, he opened another restaurant, "Pasta Break", in an adjacent space. On June 9, 1999, the outdoor plaza reopened after undergoing $12 million in renovations. This involved replacing marble pavers with over 40,000 gray and pink granite stones, as well as adding benches, planters, food kiosks, and outdoor dining areas.
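The height figures above are internally consistent, as a quick conversion check shows (the foot-to-meter factor is exact; the source rounds its metric values):

```python
FT_PER_M = 1 / 0.3048          # a foot is exactly 0.3048 m

north_roof_ft = 1368
antenna_ft = 362
tip_ft = north_roof_ft + antenna_ft          # 1,730 ft, as stated

print(f"North Tower roof: {north_roof_ft / FT_PER_M:.0f} m")   # ~417 m
print(f"tip with antenna: {tip_ft / FT_PER_M:.0f} m")          # ~527 m, rounded to 530 in the text
print(f"South Tower roof: {1362 / FT_PER_M:.0f} m")            # ~415 m
```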
Although most of the space in the World Trade Center complex was off-limits to the public, the South Tower featured a public glass-enclosed observation deck on the 107th floor, called Top of the World, as well as an open-air deck at the 110th-story roof level. The observation deck opened in December 1975 and operated from 9:30 a.m. to 11:30 p.m. (June to August) and from 9:30 a.m. to 9:30 p.m. (September to May). After paying an entrance fee on the second floor, visitors were required to pass through security checks added after the 1993 World Trade Center bombing. They were then taken to the 107th-floor indoor observatory, at a height of 1,310 feet (400 m), by a dedicated express elevator, which could only be accessed through the core. The exterior columns at the observatory level were narrowed to allow 28 inches of window width between them. In 1995, the Port Authority leased operation of the observatory to Ogden Entertainment, which decided to renovate it. On April 30, 1997, the Top of the World tour reopened after renovations were finished. Attractions added to the observation deck included 24 video monitors, which provided descriptions of 44 points of interest in six languages; a theater showing "Manhattan Magic", a film of a simulated helicopter tour around the city; a model of Manhattan with 750 buildings; a Kodak photo booth; and two gift shops. The 107th floor also featured a subway-themed food court, with outlets including Sbarro Street Station and Nathan's Famous Hot Dogs, and a dining area that simulated Central Park. Weather permitting, visitors could ride two short escalators up from the 107th-floor viewing area to an outdoor platform at a height of 1,377 ft (420 m). On a clear day, visitors could see up to 50 miles (80 km). An anti-suicide fence was placed on the roof itself, with the viewing platform set back and elevated above it, requiring only an ordinary railing. This left the view unobstructed, unlike the observation deck of the Empire State Building. Windows on the World, the restaurant on the North Tower's 106th and 107th floors, opened in April 1976. It was developed by restaurateur Joe Baum at a cost of more than $17 million. As well as the main restaurant, two offshoots were located at the top of the North Tower: Hors d'Oeuvrerie (which offered a Danish smorgasbord during the day and sushi in the evening) and Cellar in the Sky (a small wine bar). Windows on the World also had a wine school program run by Kevin Zraly, who published a book on the course. Windows on the World was forced to close following the 1993 World Trade Center bombing, as the explosion damaged receiving areas, storage areas, and parking spots used by the restaurant complex. On May 12, 1994, the Joseph Baum & Michael Whiteman Company won the contract to run the restaurants after Windows's former operator, Inhilco, gave up its lease. After the complex's reopening on June 26, 1996, the Greatest Bar on Earth and Cellar in the Sky (which reopened after Labor Day) replaced the original restaurant offshoots. In 1999, Cellar in the Sky was changed into an American steakhouse and renamed Wild Blue. In 2000 (its last full year of operation), Windows on the World reported revenues of $37 million, making it the highest-grossing restaurant in the United States. The Skydive Restaurant, a 180-seat cafeteria on the 44th floor of 1 WTC conceived for office workers, was also operated by Windows on the World. In its last iteration, Windows on the World received mixed reviews.
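The "up to 50 miles" visibility figure is consistent with the standard horizon-distance approximation d ≈ √(2Rh) for an observer at height h on a sphere of radius R. The refraction correction below (effective radius 7/6 R) is the usual textbook value, not anything specific to the towers.

```python
import math

R_EARTH_M = 6_371_000
M_PER_MILE = 1_609.34
deck_height_m = 420            # the 1,377 ft outdoor platform, as stated above

d_geometric = math.sqrt(2 * R_EARTH_M * deck_height_m)
d_refracted = math.sqrt(2 * (7 / 6) * R_EARTH_M * deck_height_m)

print(f"geometric horizon:        {d_geometric / M_PER_MILE:.0f} mi")   # ~45 mi
print(f"with standard refraction: {d_refracted / M_PER_MILE:.0f} mi")   # ~49 mi
```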
Ruth Reichl, a New York Times food critic, said in December 1996 that "nobody will ever go to Windows on the World just to eat, but even the fussiest food person can now be content dining at one of New York's favorite tourist destinations". She gave the restaurant two out of four stars, signifying "very good" quality. In his 2009 book Appetite City, William Grimes wrote that "At Windows, New York was the main course". In 2014, Ryan Sutton of Eater.com compared the now-destroyed restaurant's cuisine to that of its replacement, One World Observatory. He said, "Windows helped usher in a new era of captive audience dining in that the restaurant was a destination in itself, rather than a lazy by-product of the vital institution it resided in." Five smaller buildings stood on the 16-acre (65,000 m2) block. One was a 22-floor hotel, which opened at the southwest corner of the site in 1981 as the Vista Hotel; in 1995, it became the Marriott World Trade Center (3 WTC). Three low-rise, steel-framed office buildings (4 WTC, 5 WTC, and 6 WTC) also stood around the plaza. 6 World Trade Center, at the northwest corner, housed the United States Customs Service. 5 World Trade Center was located at the northeast corner above the PATH station, and 4 World Trade Center, at the southeast corner, housed the U.S. Commodities Exchange. In 1987, construction was completed on a 47-floor office building, 7 World Trade Center, located to the north of the superblock. Beneath the World Trade Center complex was an underground shopping mall. It had connections to various mass transit facilities, including the New York City Subway system and the Port Authority's PATH trains. Underneath the World Trade Center was one of the world's largest gold depositories, owned by a group of commercial banks. The 1993 bomb detonated close to the vault. Seven weeks after the September 11 attacks, $230 million in precious metals was removed from the basement vaults of 4 WTC. This included 3,800 100-troy-ounce 24-carat gold bars and 30,000 1,000-ounce silver bars. Major events On February 12, 1975, a three-alarm fire broke out on the North Tower's 11th floor. It spread to the 9th and 14th floors after igniting telephone cable insulation in a utility shaft that ran vertically between floors. Areas at the furthest extent of the fire were extinguished almost immediately, and the original fire was put out in a few hours. Most of the damage was concentrated on the 11th floor, fueled by cabinets filled with paper, alcohol-based fluid for office machines, and other office equipment. Fireproofing protected the steel, and there was no structural damage to the tower. In addition to fire damage on the 9th through 14th floors, the water used to extinguish the fire damaged a few of the floors below. At that time, the World Trade Center had no fire sprinkler systems. On March 12, 1981, the Port Authority announced a $45 million plan to install sprinklers throughout the World Trade Center. A disgruntled custodian, 19-year-old Oswald Adorno, was discovered to have deliberately started the fire and was criminally charged. The first terrorist attack on the World Trade Center occurred on February 26, 1993, at 12:17 p.m. A Ryder truck filled with 1,500 pounds (680 kg) of explosives, planted by Ramzi Yousef, detonated in the North Tower's underground garage. The blast opened a 100 ft (30 m) hole through five sublevels, with the greatest damage occurring on levels B1 and B2 and significant structural damage on level B3.
Six people were killed and 1,042 others were injured in the attack, some from smoke inhalation. Sheikh Omar Abdel Rahman and four other individuals were later convicted for their involvement in the plot, while Yousef and Eyad Ismoil were convicted for carrying out the bombing. According to a presiding judge, the conspirators' chief aim at the time of the attack was to destabilize the North Tower and send it crashing into the South Tower, toppling both skyscrapers. Following the bombing, the floors that were blown out needed to be repaired to restore the structural support they provided to columns. The slurry wall was in peril following the bombing and the loss of the floor slabs that had provided lateral support against pressure from Hudson River water on the other side. The refrigeration plant on sublevel B5, which provided air conditioning to the entire World Trade Center complex, was heavily damaged. After the bombing, the Port Authority installed photoluminescent pathway markings in the stairwells. The fire alarm system for the entire complex needed to be replaced because critical wiring and signaling in the original system were destroyed. The South Tower did not reopen for tenants until March 18, 1993, while the North Tower remained closed until April 1. The cost to repair both buildings was estimated at $250 million. The Vista International Hotel at 3 World Trade Center remained closed until November 1, 1994, after extensive repairs and renovations that amounted to $65 million. A memorial to the victims of the bombing, a reflecting pool, was installed with the names of those who were killed in the blast. It was destroyed in the September 11 attacks; the names of the victims of the 1993 bombing are now included in the National September 11 Memorial & Museum. In January 1998, DeCavalcante crime family member Ralph Guarino gained maintenance access to the World Trade Center. He arranged a three-man crew for a heist that netted over $2 million from a Brinks delivery to the North Tower's 11th floor. On the morning of August 7, 1974, Philippe Petit performed a high-wire walk between the North and South Towers of the World Trade Center. For his unauthorized feat 1,312 feet (400 m) above the ground, he rigged a 440-pound (200 kg) cable and used a custom-made 30-foot-long (9.1 m), 55-pound (25 kg) balancing pole. He performed for 45 minutes, making eight passes along the wire. Though Petit was charged with criminal trespass and disorderly conduct, he was later freed in exchange for performing for children in Central Park. On February 20, 1981, Aerolíneas Argentinas Flight 342, operated by a Boeing 707-387B, nearly hit the transmitting antenna of the North Tower during its approach to John F. Kennedy International Airport. An air traffic controller's intervention averted the collision with less than 90 seconds to spare between the aircraft and the North Tower. The 1995 PCA world chess championship was played on the 107th floor of the South Tower. On February 8, 2000, a worker was crushed to death and another was injured by an industrial cooling unit. On August 11, 2000, an elevator passed its stop at the 78th floor and slammed into the top of the elevator shaft. Eight people were taken to hospitals, and another four were treated at the scene. Slow leasing was a hallmark of the old World Trade Center complex; the Twin Towers suffered high vacancy rates for decades, and the complex achieved 95% occupancy only in mid-2001.
Having approved plans in the late 1990s to privatize the World Trade Center, the Port Authority sought to lease it to a private entity in 2001. Bids for the lease came from Vornado Realty Trust; a joint bid between Brookfield Properties Corporation and Boston Properties; and a joint bid by Silverstein Properties and the Westfield Group. Privatizing the World Trade Center would add it to the city's tax rolls and provide funds for other Port Authority projects. On February 15, 2001, the Port Authority announced that Vornado Realty Trust had won the bidding, offering $3.25 billion for the 99-year lease. Vornado outbid Silverstein by $600 million, though Silverstein later raised his offer to $3.22 billion. However, Vornado insisted on last-minute changes to the deal, including a shorter 39-year lease, which the Port Authority considered nonnegotiable. Vornado later withdrew, and Silverstein's bid for the lease to the World Trade Center was accepted on April 26, 2001; the deal closed on July 24, 2001. Destruction On September 11, 2001, Islamist terrorists hijacked two commercial airliners and crashed them into the Twin Towers. One group, led by Mohamed Atta, crashed American Airlines Flight 11 into the northern facade of the North Tower at 8:46:40 a.m.; the aircraft struck between the 93rd and 99th floors. Seventeen minutes later, at 9:03:11 a.m., a second group, led by Marwan al-Shehhi, crashed the similarly hijacked United Airlines Flight 175 into the southern facade of the South Tower, striking it between the 77th and 85th floors. The terrorist organization al-Qaeda, led by Osama bin Laden, carried out the attacks in retaliation for certain aspects of American foreign policy, particularly U.S. support of Israel and the presence of U.S. troops in Saudi Arabia. The damage caused to the North Tower by Flight 11 destroyed any means of escape from above the impact zone, trapping 1,344 people. Flight 175 had a much more off-centered impact than Flight 11, and a single stairwell was left intact; however, only a few people managed to descend successfully before the tower collapsed. Although the South Tower was struck lower than the North Tower, thus affecting more floors, a smaller number of people (fewer than 700) were killed instantly or trapped. At 9:59 a.m., the South Tower collapsed after burning for approximately 56 minutes. The fire caused steel structural elements, already weakened by the plane's impact, to fail. The North Tower collapsed at 10:28 a.m., after burning for approximately 102 minutes. At 5:20 p.m., 7 World Trade Center began to collapse with the crumbling of its east penthouse, and it collapsed completely at 5:21 p.m. as uncontrolled fires caused structural failure. The Marriott World Trade Center hotel was destroyed during the two towers' collapse. The three remaining buildings in the WTC plaza were extensively damaged by debris and later demolished. The cleanup and recovery process at the World Trade Center site took eight months. A small church, the St. Nicholas Greek Orthodox Church, sat directly south of the towers and was destroyed during the South Tower's collapse. It was rebuilt and opened in December 2022. The Deutsche Bank Building across Liberty Street from the World Trade Center complex was later condemned because of the uninhabitable toxic conditions inside; it was deconstructed, with work completed in early 2011.
The Borough of Manhattan Community College's Fiterman Hall at 30 West Broadway was also condemned due to extensive damage; it was eventually demolished and completely rebuilt. In the immediate aftermath of the attacks, media reports suggested that tens of thousands might have been killed, as over 50,000 people could have been inside the World Trade Center. The National Institute of Standards and Technology (NIST) estimated that approximately 17,400 individuals were in the towers at the time of the attacks. Ultimately, 2,753 death certificates (excluding those for hijackers) were filed relating to the 9/11 attacks. There were 2,192 civilians who died in and around the World Trade Center, including 658 employees of Cantor Fitzgerald L.P. (an investment bank on the 101st to 105th floors of One World Trade Center), 295 employees of Marsh & McLennan Companies (located immediately below Cantor Fitzgerald on floors 93 to 101, the location of Flight 11's impact), and 175 employees of Aon Corporation. In addition to the civilian deaths, 414 sworn personnel were also killed: 343 New York City Fire Department (FDNY) firefighters, including 2 FDNY paramedics and 1 FDNY chaplain, and 71 law enforcement officers, including 37 members of the Port Authority Police Department (PAPD) and 23 members of the New York City Police Department (NYPD). Eight EMS personnel from private agencies also died in the attacks. Ten years after the attacks, the remains of only 1,629 victims had been identified. Of all the people who were still in the towers when they collapsed, only 20 were pulled out alive. After the collapse, the World Trade Center site became known as "Ground Zero". New World Trade Center Over the following years, plans were created for the reconstruction of the World Trade Center. The Lower Manhattan Development Corporation (LMDC), established in November 2001 to oversee the rebuilding process, organized competitions to select a site plan and memorial design. Memory Foundations, designed by Daniel Libeskind, was selected as the master plan; however, substantial changes were made to the design. The first new building at the site was 7 WTC, which opened on May 23, 2006. The memorial section of the National September 11 Memorial & Museum opened on September 11, 2011, and the museum opened on May 21, 2014. 1 WTC opened on November 3, 2014; 4 WTC opened on November 13, 2013; and 3 WTC opened on June 11, 2018. Under an agreement made with Silverstein Properties Inc. in November 2013, the new 2 WTC would not be built to its full height until sufficient space was leased to make the building financially viable. Above-ground construction of 5 WTC was also suspended due to a lack of tenants as well as disputes between the Port Authority and the Lower Manhattan Development Corporation. In mid-2015, Silverstein Properties revealed plans for a redesigned 2 WTC, to be designed by Bjarke Ingels and completed by 2020 with News Corp as anchor tenant. Four years later, with still no anchor tenant for 2 WTC, Silverstein expressed his intent to resume work on the tower regardless of whether a tenant had signed. Impact and legacy Plans to build the World Trade Center were controversial. Its site was the location of Radio Row, home to hundreds of commercial and industrial tenants, property owners, small businesses, and approximately 100 residents, many of whom fiercely resisted forced relocation.
A group of affected small businesses sought an injunction challenging the Port Authority's power of eminent domain. The case made its way through the court system to the United States Supreme Court, which declined to hear it. Private real-estate developers and members of the Real Estate Board of New York, led by Empire State Building owner Lawrence A. Wien, expressed concerns about this much "subsidized" office space going on the open market and competing with the private sector when there was already a glut of vacancies; the World Trade Center itself was not rented out completely until after 1979, and only then due to subsidies. Another critic questioned whether the Port Authority should have taken on a project described by some as a "mistaken social priority". The World Trade Center's design aesthetics attracted criticism from the American Institute of Architects and other groups. Lewis Mumford, author of The City in History and other works on urban planning, criticized the project, describing it and other new skyscrapers as "just glass-and-metal filing cabinets". The Twin Towers were described as looking similar to "the boxes that the Empire State Building and the Chrysler Building came in". Cultural critic Paul Fussell dismissed the Twin Towers as "charmless, merely tall, and huge blunt buildings" in his 1991 book BAD: Or, The Dumbing of America, adding that they were "brutal and despotic. Dull and witless, expressive only of dumb, raw power, they are widely touted as among the major achievements of the late twentieth century." In his 1996 book Home from Nowhere, James Howard Kunstler wrote in regard to skyscrapers like the Twin Towers and the Empire State Building that "it was probably necessary for mankind's collective ego to prove that such buildings ... could be built," but he added that they were not worth the "distortions of population density" that they produced, and that they also "overload neighborhoods and strain the infrastructure." In his book The Pentagon of Power, Lewis Mumford described the World Trade Center as an "example of the purposeless giantism and technological exhibitionism that are now eviscerating the living tissue of every great city". Other critics disliked the Twin Towers' narrow office windows, which were 18 inches (46 cm) wide and framed by pillars that restricted views on each side to narrow slots. Activist and sociologist Jane Jacobs argued that the waterfront should be kept open for New Yorkers to enjoy. Retrospectively, the American Institute of Architects ranked the World Trade Center complex 19th among 150 buildings in its List of America's Favorite Architecture, published in 2007. The original World Trade Center created a superblock that cut through the area's street grid, isolating the complex from the rest of the community. The Port Authority had eliminated several streets to make way for the towers within the World Trade Center. The project involved combining the 12-block area bounded by Vesey, Church, Liberty, and West Streets on the north, east, south, and west, respectively. 7 World Trade Center was built on the superblock's north side in the late 1980s over another block of Greenwich Street. The complex acted as a physical barrier separating Tribeca to the north from the Financial District to the south. The underground mall at the World Trade Center also drew shoppers away from surrounding streets. The project was seen as monolithic and overambitious, with the design having had no public input.
By contrast, the rebuilding plans had significant public input. The public supported rebuilding a street grid through the World Trade Center site. One of the rebuilding proposals included building an enclosed shopping street along the path of Cortlandt Street, one of the streets demolished to make room for the original World Trade Center. The Port Authority ultimately decided to rebuild Cortlandt, Fulton, and Greenwich Streets, which had been destroyed during the original World Trade Center's construction. Before its destruction, the World Trade Center was a New York City icon, and the Twin Towers were the centerpiece that represented the entire complex. They were used in film and TV projects as "establishing shots", standing for New York City as a whole. In 1999, one writer noted: "Nearly every guidebook in New York City lists the Twin Towers among the city's top ten attractions." Among the films that used the complex as a filming location was Sidney Lumet's The Wiz (1978), where the World Trade Center was used to represent Emerald City. Some films such as King Kong (1976) used the World Trade Center's Twin Towers as a setting for lengthy scenes, while others such as Independence Day (1996) and Home Alone 2: Lost in New York (1992) depicted the complex only briefly. Several high-profile events occurred at the World Trade Center. The most notable took place in 1974, when French high-wire performer Philippe Petit walked between the two towers on a tightrope, making eight passes along a steel cable, as shown in the documentary film Man on Wire (2008) and depicted in the feature film The Walk (2015). In 1975, Owen J. Quinn base-jumped from the roof of the North Tower and safely landed on the plaza between the buildings. Quinn claimed that he was trying to publicize the plight of the poor. On May 26, 1977, Brooklyn toymaker George Willig scaled the exterior of the South Tower. He later said, "It looked unscalable; I thought I'd like to try it." Six years later, high-rise firefighting and rescue advocate Dan Goodwin successfully climbed the outside of the North Tower to call attention to the inability to rescue people potentially trapped in the upper floors of skyscrapers. The complex was featured in numerous works of popular culture; in 2006, it was estimated that the World Trade Center had appeared in some form in 472 films. Several iconic meanings were attributed to the World Trade Center. Film critic David Sterritt, who lived near the complex, said that the World Trade Center's appearance in the 1978 film Superman "summarized a certain kind of American grandeur [...] the grandeur, I would say, of sheer American powerfulness". Remarking on the towers' destruction in the 1996 film Independence Day, Sterritt said: "The Twin Towers have been destroyed in various disaster movies that were made before 9/11. That became something that you couldn't do even retroactively after 9/11." Other motifs included romance, depicted in the 1988 film Working Girl, and corporate avarice, depicted in Wall Street (1987) and The Bonfire of the Vanities (1990). Comic books, animated cartoons, television shows, video games, and music videos also used the complex as a setting. After the September 11 attacks, some movies and TV shows deleted scenes or episodes set within the World Trade Center. For example, The Simpsons episode "The City of New York vs.
Homer Simpson", which first aired in 1997, was removed from syndication after the attacks because a scene showed the World Trade Center. Songs that mentioned the World Trade Center were no longer aired on radio, and the release dates of some films, such as the 2001–2002 films Sidewalks of New York; People I Know; and Spider-Man were delayed so producers could remove film and poster scenes that included the World Trade Center. The 2001 film Kissing Jessica Stein, which was shown at the Toronto International Film Festival the day before the attacks, had to be modified before its general public release so the filmmakers could delete scenes that depicted the World Trade Center. Other episodes and films mentioned the attacks directly or depicted the World Trade Center in alternate contexts. The production of some family-oriented films was also sped up due to a large demand for the genre after the attacks. Demand for horror and action films decreased, but within a short time demand returned to normal. By the attacks' first anniversary, over sixty "memorial films" had been created. Filmmakers were criticized for removing scenes related to the World Trade Center. Rita Kempley of The Washington Post said "if we erase the towers from our art, we erase it [sic] from our memories". Author Donald Langmead compared the phenomenon to the 1949 novel Nineteen Eighty-Four, where historic mentions of events are retroactively "rectified". Other filmmakers such as Michael Bay, who directed the 1998 film Armageddon, opposed retroactively removing references to the World Trade Center based on post-9/11 attitudes. Oliver Stone's film World Trade Center—the first movie that specifically examined the attacks' effects on the World Trade Center as contrasted with the effects elsewhere—was released in 2006. Several years after the attacks, works such as "The City of New York vs. Homer Simpson" were placed back in syndication. The National September 11 Museum has preserved many of the works that feature depictions of the original World Trade Center. Logo The World Trade Center used two different logos over its lifetime. The first logo, used from 1973 to 1993, consisted of two open rectangles, one of which was upside down. When the complex reopened after the 1993 bombing, a new logo was unveiled, consisting of the towers encircled by a globe. This logo was found throughout the complex and was printed on commemorative mugs given out to tenants with the caption "Welcome back to the World Trade Center". See also Footnotes References External links |
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ames_Research_Center] | [TOKENS: 5596] |
Ames Research Center The Ames Research Center (ARC), also known as NASA Ames, is a major NASA research center at Moffett Federal Airfield in California's Silicon Valley. It was founded in 1939 as the second National Advisory Committee for Aeronautics (NACA) laboratory. That agency was dissolved and its assets and personnel transferred to the newly created National Aeronautics and Space Administration (NASA) on October 1, 1958. NASA Ames is named in honor of Joseph Sweetman Ames, a physicist and one of the founding members of NACA. At last estimate, NASA Ames had over US$3 billion in capital equipment, 2,300 research personnel, and a US$750 million annual budget. Ames was founded to conduct wind-tunnel research on the aerodynamics of propeller-driven aircraft; however, its role has expanded to encompass spaceflight and information technology. Ames plays a role in many NASA missions. It provides leadership in astrobiology; small satellites; robotic lunar exploration; the search for habitable planets; supercomputing; intelligent/adaptive systems; advanced thermal protection; planetary science; and airborne astronomy. Ames also develops tools for a safer, more efficient national airspace. The center's current director is Eugene Tu. The site was mission center for several key missions (Kepler, the Stratospheric Observatory for Infrared Astronomy (SOFIA), and the Interface Region Imaging Spectrograph) and a major contributor to the "new exploration focus" as a participant in the Orion crew exploration vehicle. Missions Although Ames is a NASA research center, and not a flight center, it has nevertheless been closely involved in a number of astronomy and space missions. The Pioneer program's eight successful space missions from 1965 to 1978, initially aimed at the inner Solar System, were managed by Charles Hall at Ames. By 1972, the program supported the bold flyby missions to Jupiter and Saturn with Pioneer 10 and Pioneer 11. Those two missions were trailblazers (radiation environment, new moons, gravity-assist flybys) for the planners of the more complex Voyager 1 and Voyager 2 missions, launched five years later. In 1978, the end of the program brought about a return to the inner Solar System, with the Pioneer Venus Orbiter and Multiprobe, this time using orbital insertion rather than flyby missions. Lunar Prospector was the third mission selected by NASA for full development and construction as part of the Discovery Program. At a cost of $62.8 million, the spacecraft was placed into a low polar orbit of the Moon for a 19-month mission, mapping surface composition and possible polar ice deposits, measuring magnetic and gravity fields, and studying lunar outgassing events. Based on Lunar Prospector Neutron Spectrometer (NS) data, mission scientists determined that there is indeed water ice in the polar craters of the Moon. The mission ended July 31, 1999, when the orbiter was guided to an impact into a crater near the lunar south pole in an (unsuccessful) attempt to analyze lunar polar water by vaporizing it to allow spectroscopic characterization from Earth telescopes. The 11-pound (5 kg) GeneSat-1, carrying bacteria inside a miniature laboratory, was launched on December 16, 2006. The very small NASA satellite proved that scientists can quickly design and launch a new class of inexpensive spacecraft and conduct significant science. The Lunar Crater Observation and Sensing Satellite (LCROSS) mission to look for water on the Moon was a 'secondary payload spacecraft.'
LCROSS began its trip to the Moon on the same rocket as the Lunar Reconnaissance Orbiter (LRO), which continues to conduct a different lunar task. The two launched in June 2009 on an Atlas V rocket from Kennedy Space Center, Florida. The Kepler mission was NASA's first mission capable of finding Earth-size and smaller planets. It monitored the brightness of stars to find planets that pass in front of them during the planets' orbits. During such passes, or 'transits', a planet slightly decreases the star's brightness. The Stratospheric Observatory for Infrared Astronomy (SOFIA) was a joint venture of the U.S. and German aerospace agencies, NASA and the German Aerospace Center (DLR), to make an infrared telescope platform that could fly at altitudes high enough to be in the infrared-transparent regime above the water vapor in the Earth's atmosphere. The aircraft was supplied by the U.S., and the infrared telescope by Germany. Modifications of the Boeing 747SP airframe to accommodate the telescope, mission-unique equipment, and large external door were made by L-3 Communications Integrated Systems of Waco, Texas. The Interface Region Imaging Spectrograph mission is a partnership with the Lockheed Martin Solar and Astrophysics Laboratory to understand the processes at the boundary between the Sun's chromosphere and corona. This mission is sponsored by the NASA Small Explorer program. The Lunar Atmosphere and Dust Environment Explorer (LADEE) mission was developed by NASA Ames and successfully launched to the Moon on September 6, 2013. In addition, Ames has played a support role in a number of missions, most notably the Mars Pathfinder and Mars Exploration Rover missions, where the Ames Intelligent Robotics Laboratory played a key role. NASA Ames was a partner on Mars Phoenix, a Mars Scout Program mission that sent a high-latitude lander to Mars; the lander deployed a robotic arm to dig trenches up to 1.6 feet (0.5 m) into the layers of water ice and analyze the soil composition. Ames is also a partner on the Mars Science Laboratory and its Curiosity rover, a next-generation Mars rover exploring for signs of organics and complex molecules. Aviation systems The Aviation Systems Division conducts research and development in two primary areas: air traffic management and high-fidelity flight simulation. For air traffic management, researchers are creating and testing concepts to allow for up to three times today's level of aircraft in the national airspace. Automation and its attendant safety consequences are key foundations of the concept development. Historically, the division has developed products that have been implemented for the flying public, such as the Traffic Management Adviser, which is being deployed nationwide. For high-fidelity flight simulation, the division operates the world's largest flight simulator (the Vertical Motion Simulator), a Level-D 747-400 simulator, and a panoramic air traffic control tower simulator. These simulators have been used for a variety of purposes, including continued training for Space Shuttle pilots, development of future spacecraft handling qualities, helicopter control system testing, Joint Strike Fighter evaluations, and accident investigations. Personnel in the division have a variety of technical backgrounds, including guidance and control, flight mechanics, flight simulation, and computer science. Customers outside NASA have included the FAA, DOD, DHS, DOT, NTSB, Lockheed Martin, and Boeing.
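Returning to the Kepler transit method described above: to first order, the dip it measures is the square of the planet-to-star radius ratio. A minimal sketch with round solar-system values (limb darkening and grazing geometries are ignored):

```python
# Approximate radii in km (round values, for a first-order estimate only).
R_SUN = 695_700
R_EARTH = 6_371
R_JUPITER = 69_911

def transit_depth(r_planet: float, r_star: float = R_SUN) -> float:
    """Fractional drop in stellar brightness during a central transit."""
    return (r_planet / r_star) ** 2

print(f"Earth-size planet:   {transit_depth(R_EARTH):.6%}")    # ~0.0084%
print(f"Jupiter-size planet: {transit_depth(R_JUPITER):.4%}")  # ~1.01%
```

The roughly hundredfold difference in depth is why detecting Earth-size planets required Kepler's space-based photometric precision.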
The center's flight simulation and guidance laboratory was listed on the National Register of Historic Places in 2017. Information technology Ames is the home of NASA's large research and development divisions in advanced supercomputing, human factors, and artificial intelligence (Intelligent Systems). These research and development organizations support NASA's exploration efforts, as well as the continued operation of the International Space Station and the space science and aeronautics work across NASA. The center also runs and maintains the E Root nameserver of the DNS. The Intelligent Systems Division (Code TI) is NASA's leading R&D division developing advanced intelligent software and systems for all of NASA's mission directorates. It provides software expertise for earth science applications, aeronautics, space science missions, the International Space Station, and the Crewed Exploration Vehicle (CEV). The first AI in space (on Deep Space 1) was developed in Code TI, as was the MAPGEN software that plans the daily activities for the Mars Exploration Rovers; the same core reasoner is used in Ensemble to operate the Phoenix lander and in the planning system for the International Space Station's solar arrays. Integrated system health management for the International Space Station's control moment gyroscopes, collaborative systems with semantic search tools, and robust software engineering round out the scope of Code TI's work. The Human Systems Integration Division "advances human-centered design and operations of complex aerospace systems through analysis, experimentation, and modeling of human performance and human-automation interaction to make dramatic improvements in safety, efficiency and mission success". For decades, the Human Systems Integration Division has been on the leading edge of human-centered aerospace research. The division is home to over 100 researchers, contractors, and administrative staff. The NASA Advanced Supercomputing Division at Ames operates several of the agency's most powerful supercomputers, including the petaflop-scale Pleiades, Aitken, and Electra systems. Originally called the Numerical Aerodynamic Simulation Division, the facility has housed more than 40 production and test supercomputers since its construction in 1987, and has served as a leader in high-performance computing, developing technology used across the industry, including the NAS Parallel Benchmarks and the Portable Batch System (PBS) job scheduling software. In September 2009, Ames launched NEBULA, a fast and powerful cloud computing platform designed to handle NASA's massive data sets while complying with security requirements. The pilot used open-source components, complied with FISMA, and could scale to government-sized demands while being extremely energy efficient. In July 2010, NASA CTO Chris C. Kemp open-sourced Nova, the technology behind the NEBULA project, in collaboration with Rackspace, launching OpenStack. OpenStack has subsequently become one of the largest and fastest-growing open-source projects in the history of computing, and as of 2014 had been included in most major distributions of Linux, including those from Red Hat, Oracle, HP, SUSE, and Canonical. Image processing NASA Ames was one of the first locations to conduct research on image processing of satellite-platform aerial photography. Some of the pioneering techniques of contrast enhancement using Fourier analysis were developed at Ames in conjunction with researchers at ESL Inc.
Wind tunnels The NASA Ames Research Center wind tunnels are known not only for their immense size, but also for the diverse characteristics that enable various kinds of scientific and engineering research. The Unitary Plan Wind Tunnel (UPWT) was completed in 1956 at a cost of $27 million under the Unitary Plan Act of 1949. Since its completion, the UPWT has been the most heavily used facility in NASA's wind tunnel fleet. Every major commercial transport and almost every military jet built in the United States over the last 40 years has been tested in this facility. The Mercury, Gemini, and Apollo spacecraft, as well as the Space Shuttle, were also tested in this tunnel complex. Ames Research Center also houses the world's largest wind tunnel, part of the National Full-Scale Aerodynamic Complex (NFAC): it is large enough to test full-sized planes, rather than scale models. The complex of wind tunnels was listed on the National Register in 2017. The 40 by 80 foot wind tunnel circuit was originally constructed in the 1940s and is now capable of providing test velocities up to 300 knots (560 km/h; 350 mph). It is used to support an active research program in aerodynamics, dynamics, model noise, and full-scale aircraft and their components. The aerodynamic characteristics of new configurations are investigated with an emphasis on estimating the accuracy of computational methods. The tunnel is also used to investigate the aeromechanical stability boundaries of advanced rotorcraft and rotor-fuselage interactions. Stability and control derivatives are also determined, including the static and dynamic characteristics of new aircraft configurations. The acoustic characteristics of most of the full-scale vehicles are also determined, as well as acoustic research aimed at discovering and reducing aerodynamic sources of noise. In addition to the normal data-gathering methods (e.g., balance systems, pressure-measuring transducers, and temperature-sensing thermocouples), state-of-the-art, non-intrusive instrumentation (e.g., laser velocimeters and shadowgraphs) is available to help determine flow direction and velocity in and around the lifting surfaces of aircraft. The 40 by 80 Foot Wind Tunnel is primarily used for determining the low- and medium-speed aerodynamic characteristics of high-performance aircraft, rotorcraft, and fixed-wing, powered-lift V/STOL aircraft. The 80 by 120 Foot Wind Tunnel has the world's largest wind tunnel test section. This open-circuit leg was added, and a new fan drive system installed, in the 1980s. It is currently capable of air speeds up to 100 knots (190 km/h; 120 mph). This section is used in similar ways to the 40 by 80 foot section, but it is capable of testing larger aircraft, albeit at slower speeds. Some of the test programs that have come through the 80 by 120 Foot section include: the F-18 High Angle of Attack Vehicle, the DARPA/Lockheed Common Affordable Lightweight Fighter, the XV-15 Tilt Rotor, and the Advanced Recovery System parafoil. The 80 by 120 foot test section is capable of testing a full-size Boeing 737. Although decommissioned by NASA in 2003, the NFAC is now being operated by the United States Air Force as a satellite facility of the Arnold Engineering Development Complex (AEDC). Arc Jet Complex The Ames Arc Jet Complex is an advanced thermophysics facility where sustained hypersonic and hyperthermal testing of vehicle thermal protection systems takes place under a variety of simulated flight and re-entry conditions.
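A side note on the tunnel speeds quoted above: aerodynamic loads scale with the square of airspeed through the dynamic pressure q = ½ρv². The sketch below assumes sea-level air density; it illustrates why the huge 80 by 120 ft section runs at only 100 knots while the 40 by 80 ft circuit reaches 300 knots.

```python
RHO_AIR = 1.225                 # kg/m^3, sea-level standard air (assumed)
M_S_PER_KNOT = 0.514444

def dynamic_pressure_kpa(speed_knots: float) -> float:
    """Dynamic pressure q = 0.5 * rho * v^2, in kilopascals."""
    v = speed_knots * M_S_PER_KNOT
    return 0.5 * RHO_AIR * v**2 / 1000.0

print(f"40 x 80 ft section at 300 kn:  {dynamic_pressure_kpa(300):.1f} kPa")  # ~14.6
print(f"80 x 120 ft section at 100 kn: {dynamic_pressure_kpa(100):.1f} kPa")  # ~1.6
```

The ninefold difference in dynamic pressure means the slower section can test much larger, structurally lighter articles at realistic low-speed conditions.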
Of its seven available test bays, four currently contain arc jet units of differing configurations. These are the Aerodynamic Heating Facility (AHF), the Turbulent Flow Duct (TFD), the Panel Test Facility (PTF), and the Interaction Heating Facility (IHF). The support equipment includes two DC power supplies, a steam-ejector-driven vacuum system, a water-cooling system, high-pressure gas systems, a data acquisition system, and other auxiliary systems. The largest power supply is capable of delivering 75 megawatts (MW) for 30 minutes or 150 MW for 15 seconds, which, coupled with a high-volume five-stage steam ejector vacuum-pumping system, allows Ames to match high-altitude atmospheric conditions with large samples. The Thermo-Physics Facilities Branch operates four arc jet facilities. The Interaction Heating Facility (IHF), with an available power of over 60 MW, is one of the highest-power arc jets available. It is a very flexible facility, capable of long run times of up to one hour, and able to test large samples in both stagnation and flat-plate configurations. The Panel Test Facility (PTF) uses a unique semielliptic nozzle for testing panel sections. Powered by a 20-MW arc heater, the PTF can perform tests on samples for up to 20 minutes. The Turbulent Flow Duct provides supersonic, turbulent, high-temperature air flows over flat surfaces. The TFD is powered by a 20-MW Hüls arc heater and can test samples 203 by 508 millimeters (8.0 by 20.0 in) in size. The Aerodynamic Heating Facility (AHF) has similar characteristics to the IHF arc heater, offering a wide range of operating conditions, sample sizes, and extended test times. A cold-air-mixing plenum allows for simulations of ascent or high-speed flight conditions. Catalycity studies using air or nitrogen can be performed in this flexible rig. A five-arm model support system allows the user to maximize testing efficiency. The AHF can be configured with either a Hüls or a segmented arc heater, up to 20 MW (for scale, 1 MW is enough power to supply 750 homes). The Arc Jet Complex was listed on the National Register in 2017. Range complex The Ames Vertical Gun Range (AVGR) was designed to conduct scientific studies of lunar impact processes in support of the Apollo missions. In 1979, it was established as a National Facility, funded through the Planetary Geology and Geophysics Program. In 1995, increased scientific needs across various disciplines resulted in joint core funding by three different science programs at NASA Headquarters (Planetary Geology and Geophysics, Exobiology, and Solar System Origins). In addition, the AVGR provides programmatic support for various proposed and ongoing planetary missions (e.g., Stardust, Deep Impact). Using its 0.30-caliber light-gas gun and powder gun, the AVGR can launch projectiles to velocities ranging from 500 to 7,000 m/s (1,600 to 23,000 ft/s; 1,100 to 15,700 mph). By varying the gun's angle of elevation with respect to the target vacuum chamber, impact angles from 0° to 90° relative to the gravitational vector are possible. This unique feature is extremely important in the study of crater formation processes. The target chamber is approximately 2.5 meters (8 ft 2 in) in diameter and height and can accommodate a wide variety of targets and mounting fixtures. It can maintain vacuum levels below 0.03 torr (4.0 Pa), or it can be backfilled with various gases to simulate different planetary atmospheres. Impact events are typically recorded with high-speed video or film, or with particle image velocimetry (PIV).
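For a sense of scale of the AVGR launch velocities just described, kinetic energy grows with the square of speed. The projectile mass below is an assumption (the text gives only the gun calibers), so the absolute numbers are purely illustrative.

```python
def kinetic_energy_j(mass_kg: float, speed_ms: float) -> float:
    """Kinetic energy E = 0.5 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_ms**2

m = 0.0003   # 0.3 g projectile, assumed for illustration
for v in (500, 7_000):
    print(f"{v:>5} m/s -> {kinetic_energy_j(m, v):>8,.0f} J")
# The 14x range in speed spans a ~196x range in impact energy,
# which is why launch velocity matters so much in cratering studies.
```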
The Hypervelocity Free-Flight (HFF) Range currently comprises two active facilities: the Aerodynamic Facility (HFFAF) and the Gun Development Facility (HFFGDF). The HFFAF is a combined ballistic range and shock-tube-driven wind tunnel. Its primary purpose is to examine the aerodynamic characteristics and flow-field structural details of free-flying aeroballistic models. The HFFAF has a test section equipped with 16 shadowgraph-imaging stations. Each station can be used to capture an orthogonal pair of images of a hypervelocity model in flight. These images, combined with the recorded flight time history, can be used to obtain critical aerodynamic parameters such as lift, drag, static and dynamic stability, flow characteristics, and pitching moment coefficients. For very high Mach number (M > 25) simulations, models can be launched into a counter-flowing gas stream generated by the shock tube. The facility can also be configured for hypervelocity impact testing and has an aerothermodynamic capability as well. The HFFAF is currently configured to operate the 1.5-inch (38 mm) light-gas gun in support of continuing thermal imaging and transition research for NASA's hypersonics program. The HFFGDF is used for gun performance enhancement studies and occasional impact testing. The facility uses the same arsenal of light-gas and powder guns as the HFFAF to accelerate particles ranging from 3.2 to 25.4 millimeters (0.13 to 1.00 in) in diameter to velocities ranging from 0.5 to 8.5 km/s (1,100 to 19,000 mph). Most of the research effort to date has centered on Earth atmosphere entry configurations (Mercury, Gemini, Apollo, and Shuttle), planetary entry designs (Viking, Pioneer Venus, Galileo, and MSL), and aerobraking (AFE) configurations. The facility has also been used for scramjet propulsion studies (the National Aerospace Plane, NASP) and meteoroid/orbital debris impact studies (Space Station and RLV). In 2004, the facility was utilized for foam-debris dynamics testing in support of the Return to Flight effort. As of March 2007, the GDF had been reconfigured to operate a cold-gas gun for subsonic CEV capsule aerodynamics. The Electric Arc Shock Tube (EAST) Facility is used to investigate the effects of radiation and ionization that occur during very high velocity atmospheric entries. In addition, the EAST can also provide air-blast simulations requiring the strongest possible shock generation in air at an initial pressure loading of 1 standard atmosphere (100 kPa) or greater. The facility has three separate driver configurations to meet a range of test requirements: the driver can be connected to a diaphragm station of either a 102-millimeter (4.0 in) or a 610-millimeter (24 in) shock tube, and the high-pressure 102-millimeter (4.0 in) shock tube can also drive a 762-millimeter (30.0 in) shock tunnel. Energy for the drivers is supplied by a 1.25 MJ capacitor storage system. United States Geological Survey (USGS) In September 2016, the United States Geological Survey (USGS) announced plans to relocate its West Coast science center from nearby Menlo Park to the Ames Research Center at Moffett Field. The relocation is expected to take five years and will begin in 2017 with 175 USGS employees moving to Moffett. The relocation is designed to save money on the $7.5 million annual rent the USGS pays for its Menlo Park campus.
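As an illustration of how the shadowgraph stations described above yield aerodynamic coefficients, the sketch below infers a drag coefficient from the velocity decay between two stations. Every number in it (model mass, diameter, air density, velocities, spacing) is an assumption for illustration, and the one-step finite difference is the crudest possible version of the real multi-station fit.

```python
import math

# Hypothetical data reduction for a ballistic-range shot (all values assumed).
m = 0.05                 # model mass, kg
d = 0.038                # model diameter, m (matching a 1.5 in launcher bore)
rho = 1.2                # test-section air density, kg/m^3
A = math.pi * (d / 2) ** 2

v1, v2 = 5000.0, 4985.0  # velocities measured at two successive stations, m/s
dx = 1.0                 # station spacing, m

# Drag equation: m * v * dv/dx = -0.5 * rho * v^2 * Cd * A
v_mid = 0.5 * (v1 + v2)
decel = v_mid * (v2 - v1) / dx           # a = v * dv/dx (negative)
cd = -2.0 * m * decel / (rho * v_mid**2 * A)
print(f"inferred drag coefficient: {cd:.2f}")   # ~0.22 with these assumed numbers
```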
The land in Menlo Park is owned by the General Services Administration, which is required by federal law to charge market-rate rent. Education The NASA Experience exhibit at the Chabot Space and Science Center serves as the visitor center for NASA's Ames Research Center. The exhibit provides a dynamic and interactive space for the public to learn about local contributions to space exploration across the years. From models of spacecraft and genuine spacesuits from as early as the Mercury and Gemini missions to artifacts related to NASA's upcoming Artemis missions, the NASA Ames Visitor Center gives visitors access to over 80 years of Ames history and a look into current and future projects. Ames' expertise in wind tunnel testing, rover design and testing, space robotics, supercomputing, and more is on display. The exhibit opened on November 12, 2021. The NASA Ames Exploration Center is a science museum and education center for NASA. There are displays and interactive exhibits about NASA technology, missions, and space exploration. A Moon rock, a meteorite, and other geologic samples are on display. The theater shows movies with footage from NASA's explorations of Mars and the planets, and about the contributions of the scientists at Ames. This facility is currently closed. In 1999, Mark León, under his mentor Dave Lavery, developed NASA's Robotics Education Project (now called the Robotics Alliance Project), which has reached over 100,000 students nationwide through FIRST and Botball robotics competitions. The Project's FIRST branch originally comprised FRC Team 254: "The Cheesy Poofs", an all-male team from Bellarmine High School in San Jose, California. In 2006, Team 1868: "The Space Cookies", an all-female team, was founded in collaboration with the Girl Scouts. In 2012, Team 971: "Spartan Robotics" of Mountain View High School joined the Project, though the team continues to operate at its school. All three teams are highly decorated: all have won regional competitions, two have won the FIRST Championship, two have won the Regional Chairman's Award, and one is a Hall of Fame team. The three teams are collectively referred to as "House teams". The mission of the project is "To create a human, technical, and programmatic resource of robotics capabilities to enable the implementation of future robotic space exploration missions." Public-private partnerships The federal government has re-tasked portions of the facility and human resources to support private-sector industry, research, and education. HP became the first corporate affiliate of the new Bio-Info-Nano Research and Development Institute (BIN-RDI), a collaborative venture established by the University of California, Santa Cruz and NASA, based at Ames. The Bio|Info|Nano R&D Institute is dedicated to creating scientific breakthroughs through the convergence of biotechnology, information technology, and nanotechnology. Singularity University hosts its leadership and educational program at the facility. The Organ Preservation Alliance is also headquartered there; the Alliance is a nonprofit organization that works in partnership with the Methuselah Foundation's New Organ Prize "to catalyze breakthroughs on the remaining obstacles towards the long-term storage of organs" to overcome the drastic unmet medical need for viable organs for transplantation. Kleenspeed Technologies is headquartered there as well. On September 28, 2005, Google and Ames Research Center disclosed details of a long-term research partnership.
In addition to pooling engineering talent, Google planned to build a 1,000,000-square-foot (9.3 ha) facility on the ARC campus. One of the projects between Ames, Google, and Carnegie Mellon University is the Gigapan Project, a robotic platform for creating, sharing, and annotating terrestrial gigapixel images. The Planetary Content Project seeks to integrate and improve the data that Google uses for its Google Moon and Google Mars projects. On 4 June 2008, Google announced it had leased 42 acres (170,000 m2) from NASA at Moffett Field for use as office space and employee housing. Construction of the new Google campus, located near Google's Googleplex headquarters, began in 2013 with a target opening date of 2015. It is called "Bay View" as it overlooks San Francisco Bay. In May 2013, Google announced that it was launching the Quantum Artificial Intelligence Lab, to be hosted by ARC. The lab houses a 512-qubit quantum computer from D-Wave Systems, and the Universities Space Research Association (USRA) invites researchers from around the world to share time on it, the goal being to study how quantum computing might advance machine learning. Under an agreement announced on November 10, 2014, Planetary Ventures LLC (a Google subsidiary) leases Moffett Federal Airfield from NASA Ames, a site of about 1,000 acres that formerly cost the agency $6.3 million annually in maintenance and operation costs. The lease includes the restoration of the site's historic landmark Hangar One, as well as Hangars Two and Three. The lease went into effect in March 2015 and spans 60 years. Living and working at Ames An official NASA ID is required to enter Ames. In support of families working at NASA Ames Research Center, the Ames Child Care Center (ACCC) was opened in 1986. The center's goal is to serve the children of NASA employees, civil servants, contractors, and military employees working at Ames Research Center and Moffett Federal Airfield. The ACCC moved to a new on-site location in 2002 as a result of additional funding from NASA and private donors. In 2005, the ACCC opened to the general public, though at increased tuition rates compared to ACCC affiliates. There are myriad activities both inside the research center and around the base for full-time workers and interns alike. Portions of a fitness trail (also called a Parcourse trail) remain inside the base, though sections of it are now inaccessible due to changes in the base layout since it was installed. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Centaur] | [TOKENS: 2945] |
Centaur A centaur (/ˈsɛntɔːr, ˈsɛntɑːr/ SEN-tor, SEN-tar; Ancient Greek: κένταυρος, romanized: kéntauros; Latin: centaurus), occasionally hippocentaur, also called Ixionidae (Ancient Greek: Ἰξιονίδαι, romanized: Ixionídai, lit. 'sons of Ixion'), is a creature from Greek mythology with the upper body of a human and the lower body and legs of a horse, said to live in the mountains of Thessaly. In one version of the myth, the centaurs were named after Centaurus and, through his brother Lapithes, were kin to the legendary tribe of the Lapiths. Centaurs are portrayed in many Greek myths as being as wild as untamed horses, and were said to have inhabited the region of Magnesia and Mount Pelion in Thessaly, the Foloi oak forest in Elis, and the Malean peninsula in southern Laconia. Centaurs are subsequently featured in Roman mythology, and were familiar figures in the medieval bestiary. They remain a staple of modern fantastic literature. Etymology The Greek word kentauros is generally regarded as being of obscure origin. The etymology from ken + tauros, 'piercing bull', was a euhemerist suggestion in Palaephatus' rationalizing text on Greek mythology, On Incredible Tales (Περὶ ἀπίστων), which included mounted archers from a village called Nephele eliminating a herd of bulls that were the scourge of Ixion's kingdom. Another suggested etymology is "bull-slayer". Mythology The centaurs were usually said to have been born of Ixion and Nephele. As the story goes, Nephele was a cloud made into the likeness of Hera in a plot to trick Ixion into revealing his lust for Hera to Zeus. Ixion seduced Nephele, and from that relationship the centaurs were created. Another version, however, makes them children of Centaurus, a man who mated with the Magnesian mares. Centaurus was either himself the son of Ixion and Nephele (inserting an additional generation) or of Apollo and the nymph Stilbe. In the latter version of the story, Centaurus's twin brother was Lapithes, ancestor of the Lapiths. Another tribe of centaurs was said to have lived on Cyprus. According to Nonnus, the Cyprian centaurs were fathered by Zeus, who, in frustration after Aphrodite had eluded him, spilled his seed on the ground of that land. Unlike those of mainland Greece, the Cyprian centaurs were ox-horned. There were also the Lamian Pheres, twelve rustic daimones (spirits) of the Lamos river. They were set by Zeus to guard the infant Dionysos, protecting him from the machinations of Hera, but the enraged goddess transformed them into ox-horned centaurs unrelated to the Cyprian centaurs. The Lamian Pheres later accompanied Dionysos in his campaign against the Indians. The centaurs' half-human, half-horse composition has led many writers to treat them as liminal beings, caught between the two natures they embody in contrasting myths; they are both the embodiment of untamed nature, as in their battle with the Lapiths (their kin), and, conversely, teachers like Chiron. They are often depicted as wild, untamed, virile, and lascivious, and as displaying great feats of strength such as carrying rocks or boulders. The centaurs are best known for their fight with the Lapiths who, according to one origin myth, were cousins to the centaurs. The battle, called the Centauromachy, was caused by the centaurs' attempt to carry off Hippodamia and the rest of the Lapith women on the day of Hippodamia's marriage to Pirithous, who was the king of the Lapithae and a son of Ixion.
Theseus, a hero and founder of cities, who happened to be present, threw the balance in favour of the Lapiths by assisting Pirithous in the battle. The centaurs were driven off or destroyed. Another Lapith hero, Caeneus, who was invulnerable to weapons, was beaten into the earth by centaurs wielding rocks and the branches of trees. In her article "The Centaur: Its History and Meaning in Human Culture", Elizabeth Lawrence claims that the contests between the centaurs and the Lapiths typify the struggle between civilization and barbarism. The Centauromachy is most famously portrayed in the metopes of the Parthenon by Phidias and in the Battle of the Centaurs, a relief by Michelangelo. Origin of the myth The most common theory holds that the idea of centaurs came from the first reaction of a non-riding culture, as in the Minoan Aegean world, to nomads who were mounted on horses. The theory suggests that such riders would appear as half-man, half-animal. Bernal Díaz del Castillo reported that the Aztecs had the same misapprehension about Spanish cavalrymen. The Lapith tribe of Thessaly, who were the kinsmen of the centaurs in myth, were described as the inventors of horse-riding by Greek writers. The Thessalian tribes also claimed their horse breeds were descended from the centaurs. Robert Graves (relying on the work of Georges Dumézil, who argued for tracing the centaurs back to the Indian Gandharva) speculated that the centaurs were a dimly remembered, pre-Hellenic fraternal earth cult who had the horse as a totem. A similar theory was incorporated into Mary Renault's The Bull from the Sea. Variations Though female centaurs, called centaurides or centauresses, are not mentioned in early Greek literature and art, they do appear occasionally in later antiquity. A Macedonian mosaic of the 4th century BC is one of the earliest examples of the centauress in art. Ovid also mentions a centauress named Hylonome[i] who committed suicide when her husband Cyllarus was killed in the war with the Lapiths. The Kalibangan cylinder seal, dated to around 2600–1900 BC and found at a site of the Indus Valley civilization, shows a battle between men in the presence of centaur-like creatures. Other sources claim the creatures represented are actually half human and half tiger, figures that later evolved into the Hindu goddess of war. These seals are also evidence of Indus-Mesopotamia relations in the 3rd millennium BC. In a popular legend associated with Pazhaya Sreekanteswaram Temple in Thiruvananthapuram, the curse of a saintly Brahmin transformed a handsome Yadava prince into a creature with a horse's body, the prince's head, arms, and torso taking the place of the horse's head and neck. Kinnaras, another half-man, half-horse mythical creature from Indian mythology, appeared in various ancient texts, arts, and sculptures from all around India. It is shown as a horse with the torso of a man where the horse's head would be, and is similar to a Greek centaur. A centaur-like half-human, half-equine creature called Polkan appeared in Russian folk art and lubok prints of the 17th–19th centuries. Polkan is originally based on Pulicane, a half-dog from Andrea da Barberino's poem I Reali di Francia, which was once popular in the Slavonic world in prose translations. Artistic representations The extensive Mycenaean pottery found at Ugarit included two fragmentary Mycenaean terracotta figures which have been tentatively identified as centaurs. This finding suggests a Bronze Age origin for these creatures of myth.
A painted terracotta centaur was found in the "Hero's tomb" at Lefkandi, and by the Geometric period, centaurs were among the first representational figures painted on Greek pottery. An often-published Geometric period bronze of a warrior face-to-face with a centaur is at the Metropolitan Museum of Art. In Greek art of the Archaic period, centaurs are depicted in three different forms. There are also paintings and motifs on amphorae and Dipylon cups which depict winged centaurs. Centaurs were also frequently depicted in Roman art. One example is the pair of centaurs drawing the chariot of Constantine the Great and his family in the Great Cameo of Constantine (circa AD 314–16), which embodies wholly pagan imagery and contrasts sharply with the popular image of Constantine as the patron of early Christianity. Centaurs preserved a Dionysian connection in the 12th-century Romanesque carved capitals of Mozac Abbey in the Auvergne. Other similar capitals depict harvesters, boys riding goats (a further Dionysiac theme), and griffins guarding the chalice that held the wine. Centaurs are also shown on a number of Pictish carved stones from north-east Scotland erected in the 8th–9th centuries AD (e.g., at Meigle, Perthshire). Though outside the limits of the Roman Empire, these depictions appear to be derived from Classical prototypes. The John C. Hodges Library at the University of Tennessee hosts a permanent exhibit, the "Centaur from Volos". The exhibit, made by sculptor Bill Willers by combining a study human skeleton with the skeleton of a Shetland pony, is entitled "Do you believe in Centaurs?". According to the exhibitors, it was meant to mislead students in order to make them more critically aware. Depictions of centaurs in a mythical land located south beyond the world's known continents appear on a map by Urbano Monti from 1587, sometimes called Monti's Planisphere. Centaurs are common in European heraldry, although more frequent in continental than in British arms. A centaur holding a bow is referred to as a sagittary or sagittarius. Literature Jerome's version of the Life of St Anthony the Great, written by Athanasius of Alexandria about the hermit monk of Egypt, was widely disseminated in the Middle Ages; it relates Anthony's encounter with a centaur who challenged the saint but was forced to admit that the old gods had been overthrown. The episode was often depicted in art, as in The Meeting of St Anthony Abbot and St Paul the Hermit by the painter Stefano di Giovanni, known as "Sassetta". Of the two episodic depictions of the hermit Anthony's travel to greet the hermit Paul, one is his encounter with the demonic figure of a centaur along the pathway in a wood. Lucretius, in his first-century BC philosophical poem On the Nature of Things, denied the existence of centaurs based on the differing rates of growth of human and equine anatomies. Specifically, he states that at the age of three years, horses are in the prime of their life while humans at the same age are still little more than babies, making such hybrid animals impossible. Centaurs are among the creatures which 14th-century Italian poet Dante placed as guardians in his Inferno. In Canto XII, Dante and his guide Virgil meet a band led by Chiron and Pholus, guarding the bank of Phlegethon in the seventh circle of Hell, a river of boiling blood in which the violent against their neighbours are immersed, shooting arrows into any who move to a shallower spot than their allotted station.
The two poets are treated with courtesy, and Nessus guides them to a ford. In Canto XXIV, in the eighth circle, in Bolgia 7, a ditch where thieves are confined, they meet but do not converse with Cacus (who is a giant in the ancient sources), wreathed in serpents and with a fire-breathing dragon on his shoulders, arriving to punish a sinner who has just cursed God. In his Purgatorio, an unseen spirit on the sixth terrace cites the centaurs ("the drunken double-breasted ones who fought Theseus") as examples of the sin of gluttony. C.S. Lewis's The Chronicles of Narnia series portrays centaurs as wise and courageous creatures who are gifted in fields such as astronomy and medicine. John Updike's 1963 novel The Centaur contains numerous references to mythological centaurs. The author depicts a rural Pennsylvanian town as seen through the lens of the myth of the centaur. A little-known, marginalized local schoolteacher, just as the mythological Chiron did for Prometheus, gave up his life for the future of his son, who had chosen to be an independent artist in New York. In J.K. Rowling's Harry Potter series, centaurs inhabit the Forbidden Forest near Hogwarts, and are talented archers and healers; they are also known for their proficiency in astrology. The centaurs in Rick Riordan's Percy Jackson & the Olympians are portrayed as wild party-goers, with the exception of Chiron, who serves as the main director of activities at the series' demigod training facility. See also Other hybrid creatures appear in Greek mythology, always with some liminal connection that links Hellenic culture with archaic or non-Hellenic cultures. Bucentaur, the name of several historically important Venetian vessels, was linked to a posited ox-centaur or βουκένταυρος (boukentauros) by fanciful and likely spurious folk etymology. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-AutoNT-3_23-0] | [TOKENS: 4314] |
Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented, and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL), capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e. "2.7.18+" (alongside 3.11), with the plus signifying (at least some) backported security updates. Python 3.0 was released on 3 December 2008; it was a major revision, not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches continued to receive security updates until their end-of-life; 3.9.25 was the final release in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable executable of Python 3.14 is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use (see the sketch below). Alex Martelli, a Fellow at the Python Software Foundation and a Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
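To make the string-formatting point concrete, here is a minimal illustrative sketch (not from the article; the variable names are invented for the example) showing three ways Python can produce the same string:

    name, count = "world", 3
    # printf-style formatting, the oldest of the three styles
    s1 = "%s x%d" % (name, count)
    # str.format, added in Python 2.6
    s2 = "{} x{}".format(name, count)
    # formatted string literals ("f-strings"), added in Python 3.6
    s3 = f"{name} x{count}"
    assert s1 == s2 == s3 == "world x3"

All three styles remain in common use, which is precisely the "more than one way" situation the text describes.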
For example, Python's developers reject patches to non-critical parts of the CPython reference implementation when an increase in speed would not justify the cost in clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing – in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators (see the sketch below). Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels. As for expressions, Python rigidly enforces a distinction between expressions and statements, in contrast to languages such as Common Lisp, Scheme, or Ruby.
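The generator capability described above can be sketched as follows; this is a minimal illustrative example (not from the article; the function name running_total is invented) of passing data back into a generator with send():

    def running_total():
        # Receives numbers via send() and yields the running total so far.
        total = 0
        while True:
            value = yield total  # yield both emits a value and receives one
            if value is not None:
                total += value

    gen = running_total()
    next(gen)           # prime the generator: run to the first yield (emits 0)
    print(gen.send(5))  # prints 5
    print(gen.send(7))  # prints 12

The next(gen) call is required before the first send(), because a generator must be suspended at a yield before it can receive a value.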
This distinction between expressions and statements leads to some duplicated functionality, for example, list comprehensions versus for loops and conditional expressions versus if blocks. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module with several type names for use in annotations. The mypy project also includes mypyc, a Python compiler that leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics (see the sketch below).
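A quick, illustrative check (not from the article) of the floor-division, modulo, and rounding behavior described above, ending with a chained comparison:

    # Floor division rounds toward negative infinity, even for negative operands
    assert 7 // 2 == 3 and -7 // 2 == -4
    # The identity b*(a//b) + a%b == a holds for a negative divisor too
    a, b = 7, -3
    assert b * (a // b) + a % b == a
    assert a % b == -2  # the remainder takes the sign of the divisor
    # Round-half-to-even ("banker's rounding") in Python 3
    assert round(1.5) == 2 and round(2.5) == 2
    # Chained comparison: equivalent to (1 < 2) and (2 < 3)
    assert 1 < 2 < 3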
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters; a function that prints its inputs, for instance, needs only a header such as def show(args): followed by a print call. To assign a default value to a function parameter in case no actual value is provided at run time, default-value syntax can be used inside the function header. Code examples The canonical first examples are a "Hello, World!" program, which in Python 3 is the single statement print("Hello, World!"), and a program to calculate the factorial of a non-negative integer. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications – for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333 – but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains more than 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command-line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions – e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
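The bytecode step just mentioned can be inspected with the standard library's dis module; a small illustrative sketch (not from the article; the function add is invented for the example):

    import dis

    def add(a, b):
        return a + b

    # Print the CPython bytecode instructions for the function.
    # The exact opcodes shown vary between CPython versions.
    dis.dis(add)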
CPython is available for many platforms, including Windows and most modern Unix-like systems such as macOS (with support for Apple M1 Macs since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) require an operating system that supports multithreading, and the language now supports far fewer operating systems than in the past, many outdated ones having been dropped. All alternative implementations have at least slightly different semantics; for example, an alternative implementation may have unordered dictionaries, in contrast to current CPython, where dictionaries preserve insertion order. As another example from the larger Python ecosystem, PyPy does not support the full CPython C API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, where the source language is unrestricted Python, a subset of Python, or a language similar to Python; there are also specialized compilers, as well as older projects and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. Language development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented: backward-incompatible versions, feature releases that are largely compatible with earlier 3.x releases but introduce new features, and bugfix releases that fix bugs without adding features. Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
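The version-number components discussed above are visible at run time through the standard library; a quick illustrative sketch (not from the article):

    import sys

    # version_info is a tuple-like object: (major, minor, micro, releaselevel, serial)
    print(sys.version_info)
    # Tuple comparison gives a concise minimum-version check
    if sys.version_info >= (3, 10):
        print("running on Python 3.10 or later")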
Although there is a rough schedule for releases, they are often delayed if the code is not yet ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Logical_disjunction] | [TOKENS: 1822] |
Logical disjunction In logic, disjunction (also known as logical disjunction, logical or, logical addition, or inclusive disjunction) is a logical connective typically notated as ∨ and read aloud as "or". For instance, the English language sentence "it is sunny or it is warm" can be represented in logic using the disjunctive formula S ∨ W, assuming that S abbreviates "it is sunny" and W abbreviates "it is warm". In classical logic, disjunction is given a truth functional semantics according to which a formula ϕ ∨ ψ is true unless both ϕ and ψ are false. Because this semantics allows a disjunctive formula to be true when both of its disjuncts are true, it is an inclusive interpretation of disjunction, in contrast with exclusive disjunction. Classical proof theoretical treatments are often given in terms of rules such as disjunction introduction and disjunction elimination. Disjunction has also been given numerous non-classical treatments, motivated by problems including Aristotle's sea battle argument, Heisenberg's uncertainty principle, as well as the numerous mismatches between classical disjunction and its nearest equivalents in natural languages. An operand of a disjunction is a disjunct. Inclusive and exclusive disjunction Because the logical or means a disjunction formula is true when either one or both of its parts are true, it is referred to as an inclusive disjunction. This is in contrast with an exclusive disjunction, which is true when one or the other of the arguments is true, but not both (referred to as exclusive or, or XOR). When it is necessary to clarify whether inclusive or exclusive or is intended, English speakers sometimes use the phrase and/or. In terms of logic, this phrase is identical to or, but makes the inclusion of both being true explicit. Notation In logic and related fields, disjunction is customarily notated with an infix operator ∨ (Unicode U+2228 ∨ LOGICAL OR). Alternative notations include +, used mainly in electronics, as well as | and || in many programming languages. The English word or is sometimes used as well, often in capital letters. In Jan Łukasiewicz's prefix notation for logic, the operator is A, short for Polish alternatywa (English: alternative). In mathematics, the disjunction of an arbitrary number of elements a_1, …, a_n can be denoted as an iterated binary operation using a larger ⋁ (Unicode U+22C1 ⋁ N-ARY LOGICAL OR): ⋁_{i=1}^{n} a_i = a_1 ∨ a_2 ∨ … ∨ a_{n−1} ∨ a_n. Classical disjunction In the semantics of logic, classical disjunction is a truth functional operation which returns the truth value true unless both of its arguments are false.
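As a quick computational illustration (a sketch, not part of the article), the truth function just described can be tabulated in Python and compared with exclusive disjunction:

    # Enumerate the truth tables of inclusive and exclusive disjunction
    for p in (False, True):
        for q in (False, True):
            print(p, q, "inclusive:", p or q, "exclusive:", p != q)
    # Only the row p=False, q=False makes the inclusive disjunction false;
    # the exclusive disjunction is additionally false when both disjuncts are true.

For Boolean operands, the inequality test p != q coincides with exclusive or, which is why it serves as the comparison here.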
The semantic entry for classical disjunction is standardly given as follows: ϕ ∨ ψ is true if ϕ is true, or ψ is true, or both are true.[a] This semantics corresponds to the truth table in which the disjunction is false only on the row where both disjuncts are false: T ∨ T = T, T ∨ F = T, F ∨ T = T, and F ∨ F = F. In classical logic systems where logical disjunction is not a primitive, it can be defined in terms of the primitives and (∧) and not (¬) as ¬(¬ϕ ∧ ¬ψ). Alternatively, it may be defined in terms of implies (→) and not as ¬ϕ → ψ; the latter can be checked by a truth table. It may also be defined solely in terms of →, as (ϕ → ψ) → ψ, which can likewise be checked by a truth table. Disjunction satisfies properties including associativity, commutativity, idempotence, and distributivity over conjunction. Applications in computer science Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations. For example, the or operator can be used to set bits in a bit field to 1, by or-ing the field with a constant field with the relevant bits set to 1: the expression x = x | 0b00000001 will force the final bit to 1, while leaving other bits unchanged. Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages following C, bitwise disjunction is performed with the single pipe operator (|), and logical disjunction with the double pipe (||) operator. Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to true, then the second (right) operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted. This operator is thus called the parallel or. Although the type of a logical disjunction expression is Boolean in most languages (and thus can only have the value true or false), in some languages (such as Python and JavaScript), the logical disjunction operator returns one of its operands: the first operand if it evaluates to a true value, and the second operand otherwise. This allows it to fulfill the role of the Elvis operator. The Curry–Howard correspondence relates a constructivist form of disjunction to tagged union types. Set theory The membership of an element of a union set in set theory is defined in terms of a logical disjunction: x ∈ A ∪ B ⇔ (x ∈ A) ∨ (x ∈ B). Because of this, logical disjunction satisfies many of the same identities as set-theoretic union, such as associativity, commutativity, distributivity, and De Morgan's laws, identifying logical conjunction with set intersection and logical negation with set complement. Natural language Disjunction in natural languages does not precisely match the interpretation of ∨ in classical logic. Notably, classical disjunction is inclusive, while natural-language disjunction is often understood exclusively, as in the English example "Mary is a philosopher or Mary is a linguist", which would typically be taken to exclude her being both. This inference has sometimes been understood as an entailment, for instance by Alfred Tarski, who suggested that natural-language disjunction is ambiguous between a classical and a nonclassical interpretation. More recent work in pragmatics has shown that this inference can be derived as a conversational implicature on the basis of a semantic denotation which behaves classically. However, disjunctive constructions including Hungarian vagy... vagy and French soit...
soit have been argued to be inherently exclusive, being ungrammatical in contexts where an inclusive reading would otherwise be forced. Similar deviations from classical logic have been noted in cases such as free choice disjunction and simplification of disjunctive antecedents, where certain modal operators trigger a conjunction-like interpretation of disjunction. As with exclusivity, these inferences have been analyzed both as implicatures and as entailments arising from a nonclassical interpretation of disjunction. In many languages, disjunctive expressions play a role in question formation. For instance, while the English example about Mary above can be interpreted as a polar question asking whether it is true that Mary is either a philosopher or a linguist, it can also be interpreted as an alternative question asking which of the two professions is hers. The role of disjunction in these cases has been analyzed using nonclassical logics such as alternative semantics and inquisitive semantics, which have also been adopted to explain the free choice and simplification inferences. In English, as in many other languages, disjunction is expressed by a coordinating conjunction. Other languages express disjunctive meanings in a variety of ways, though it is unknown whether disjunction itself is a linguistic universal. In many languages, such as Dyirbal and Maricopa, disjunction is marked using a verb suffix. For instance, in the Maricopa example below, disjunction is marked by the suffix šaa: Johnš Billš vʔaawuumšaa (gloss: John-NOM Bill-NOM 3-come-PL-FUT-INFER), 'John or Bill will come.' |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jews#cite_note-:332-39] | [TOKENS: 15852] |
Jews Jews (Hebrew: יְהוּדִים, ISO 259-2: Yehudim, Israeli pronunciation: [jehuˈdim]), or the Jewish people, are an ethnoreligious group and nation, originating from the Israelites of ancient Israel and Judah. They traditionally adhere to Judaism. Jewish ethnicity, religion, and community are highly interrelated, as Judaism is an ethnic religion, though many ethnic Jews do not practice it. Religious Jews regard converts to Judaism as members of the Jewish nation, pursuant to the long-standing conversion process. The Israelites emerged from the pre-existing Canaanite peoples to establish Israel and Judah in the Southern Levant during the Iron Age. Originally, the term "Jews" referred to the inhabitants of the kingdom of Judah, who were distinguished from the gentiles and the Samaritans. According to the Hebrew Bible, these inhabitants predominantly originated from the tribe of Judah, whose members were descendants of Judah, the fourth son of Jacob. The tribe of Benjamin was another significant demographic in Judah, and its members were considered Jews too. By the late 6th century BCE, Judaism had evolved from the Israelite religion, dubbed Yahwism (for Yahweh) by modern scholars, with a theology that religious Jews believe to be the expression of the Mosaic covenant between God and the Jewish people. After the Babylonian exile, "Jews" referred to followers of Judaism, descendants of the Israelites, citizens of Judea, or allies of the Judean state. Jewish migration within the Mediterranean region during the Hellenistic period, followed by population transfers caused by events like the Jewish–Roman wars, gave rise to the Jewish diaspora, consisting of diverse Jewish communities that maintained their sense of Jewish history, identity, and culture. In the following millennia, Jewish diaspora communities coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (Central and Eastern Europe), the Sephardim (Iberian Peninsula), and the Mizrahim (Middle East and North Africa). While these three major divisions account for most of the world's Jews, there are other smaller Jewish groups outside of the three. Prior to World War II, the global Jewish population reached a peak of 16.7 million, representing around 0.7% of the world's population at that time. During World War II, approximately six million Jews throughout Europe were systematically murdered by Nazi Germany in a genocide known as the Holocaust. Since then, the population has slowly risen again; as of 2021, it was estimated at 15.2 million by the demographer Sergio Della Pergola, having amounted to less than 0.2% of the total world population in 2012.[b] Today, over 85% of Jews live in Israel or the United States. Israel, whose population is 73.9% Jewish, is the only country where Jews comprise more than 2.5% of the population. Jews have significantly influenced and contributed to human progress in many fields, both historically and in modern times, including science and technology, philosophy, ethics, literature, governance, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Jews founded Christianity and had an indirect but profound influence on Islam. In these ways and others, Jews have played a significant role in the development of Western culture. Name and etymology The term "Jew" is derived from the Hebrew word יְהוּדִי Yehudi, with the plural יְהוּדִים Yehudim.
Endonyms in other Jewish languages include the Ladino ג׳ודיו Djudio (plural ג׳ודיוס, Djudios) and the Yiddish ייִד Yid (plural ייִדן Yidn). Though Genesis 29:35 and 49:8 connect "Judah" with the verb yada, meaning "praise", scholars generally agree that "Judah" most likely derives from the name of a Levantine geographic region dominated by gorges and ravines. The gradual ethnonymic shift from "Israelites" to "Jews", applied regardless of descent from Judah, although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE) of the Tanakh. Some modern scholars disagree with the conflation, based on the works of Josephus, Philo, and the Apostle Paul. The English word "Jew" is a derivation of Middle English Gyw, Iewe. The latter was loaned from the Old French giu, which itself evolved from the earlier juieu, which in turn derived from judieu/iudieu, which through elision had dropped the letter "d" from the Medieval Latin Iudaeus, which, like the New Testament Greek term Ioudaios, meant both "Jew" and "Judean" / "of Judea". The Greek term was a loan from Aramaic *yahūdāy, corresponding to Hebrew יְהוּדִי Yehudi. Some scholars prefer translating Ioudaios as "Judean" in the Bible, since it is more precise, denotes the community's origins, and prevents readers from engaging in antisemitic eisegesis. Others disagree, believing that it erases the Jewish identity of Biblical characters such as Jesus. Daniel R. Schwartz distinguishes "Judean" from "Jew": for him, "Judean" refers to the inhabitants of Judea, which encompassed southern Palestine, while "Jew" refers to the descendants of Israelites who adhere to Judaism, converts included. But Shaye J. D. Cohen argues that "Judean" is inclusive of believers in the Judean God and allies of the Judean state. Another scholar, Jodi Magness, wrote that the term Ioudaioi refers to a "people of Judahite/Judean ancestry who worshipped the God of Israel as their national deity and (at least nominally) lived according to his laws." The etymological equivalent is in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.), in Arabic, "Jude" in German, "judeu" in Portuguese, "Juif" (m.)/"Juive" (f.) in French, "jøde" in Danish and Norwegian, "judío/a" in Spanish, "jood" in Dutch, "żyd" in Polish, etc., but derivations of the word "Hebrew" are also in use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)), and in Russian (Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə]; the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish) is the origin of the word "Yiddish". According to The American Heritage Dictionary of the English Language, fourth edition (2000): "It is widely recognized that the attributive use of the noun Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are now several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun."
Identity Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making the definition of who is a Jew vary slightly depending on whether a religious or a national approach to identity is used. Generally, in modern secular usage, Jews include three groups: people who were born to a Jewish family regardless of whether or not they follow the religion; those who have some Jewish ancestral background or lineage (sometimes including those who do not have strictly matrilineal descent); and people without any Jewish ancestral background or lineage who have formally converted to Judaism and therefore are followers of the religion. In the context of biblical and classical literature, "Jews" could refer to inhabitants of the Kingdom of Judah or the broader Judean region, allies of the Judean state, or anyone who followed Judaism. Historical definitions of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent and halakhic conversions. These definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud, around 200 CE. Interpretations by Jewish sages of sections of the Tanakh, such as Deuteronomy 7:1–5, which forbade intermarriage between their Israelite ancestors and seven non-Israelite nations ("for that [i.e. giving your daughters to their sons or taking their daughters for your sons] would turn away your children from following me, to serve other gods"), are used as a warning against intermarriage between Jews and gentiles. Leviticus 24:10 says that the son in a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra 10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. A popular theory is that the rape of Jewish women in captivity brought about the law of Jewish identity being inherited through the maternal line, although scholars challenge this theory, citing the Talmudic establishment of the law in the pre-exilic period. Another argument is that the rabbis changed the law of patrilineal descent to matrilineal descent due to the widespread rape of Jewish women by Roman soldiers. Since the anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was determined patrilineally in the Bible. He offers two likely explanations for the change in Mishnaic times: first, the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim); thus, a mixed marriage is forbidden, as is the union of a horse and a donkey, and in both unions the offspring are judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent could not contract a legal marriage, offspring would follow the mother. Rabbi Rivon Krygier follows a similar reasoning, arguing that Jewish descent had formerly passed through patrilineal descent and that the law of matrilineal descent had its roots in the Roman legal system. Origins The prehistory and ethnogenesis of the Jews are closely intertwined with archaeology, biology, historical textual records, mythology, and religious literature.
The ethnic origins of the Jews lie in the Israelites, a confederation of Iron Age Semitic-speaking tribes that inhabited a part of Canaan during the tribal and monarchic periods. Modern Jews are named after, and also descended from, the southern Israelite Kingdom of Judah. Gary A. Rendsburg links the early confederation of Canaanite nomadic pastoralists to the Shasu known to the Egyptians around the 15th century BCE. According to the Hebrew Bible narrative, Jewish history begins with the Biblical patriarchs such as Abraham, his son Isaac, Isaac's son Jacob, and the Biblical matriarchs Sarah, Rebecca, Leah, and Rachel, who lived in Canaan. The twelve sons of Jacob subsequently gave birth to the Twelve Tribes. Jacob and his family migrated to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. Jacob's descendants were later enslaved until the Exodus, led by Moses. Afterwards, the Israelites conquered Canaan under Moses' successor Joshua, and went through the period of the Biblical judges after the death of Joshua. Through the mediation of Samuel, the Israelites became subject to a king, Saul, who was succeeded by David and then Solomon, after whom the United Monarchy ended and was split into a separate Kingdom of Israel and a Kingdom of Judah. The Kingdom of Judah is described as comprising the tribes of Judah, Benjamin and, partially, Levi; they later assimilated remnants of other tribes who migrated there from the northern Kingdom of Israel. In the extra-biblical record, the Israelites become visible as a people between 1200 and 1000 BCE. There is well-accepted archaeological evidence referring to "Israel" in the Merneptah Stele, which dates to about 1200 BCE, and in the Mesha stele from 840 BCE. It is debated whether a period like that of the Biblical judges occurred and whether there ever was a United Monarchy. There is further disagreement about the earliest existence of the Kingdoms of Israel and Judah and their extent and power. Historians agree that a Kingdom of Israel existed by c. 900 BCE, and there is a consensus that a Kingdom of Judah existed by c. 700 BCE at least; recent excavations in Khirbet Qeiyafa have provided strong evidence for dating the Kingdom of Judah to the 10th century BCE. In 587 BCE, Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged Jerusalem, destroyed the First Temple, and deported parts of the Judahite population. Scholars disagree regarding the extent to which the Bible should be accepted as a historical source for early Israelite history. Rendsburg states that there are two approximately equal groups of scholars who debate the historicity of the biblical narrative: the minimalists, who largely reject it, and the maximalists, who largely accept it, with the minimalists being the more vocal of the two. Some of the leading minimalists reframe the biblical account as constituting the Israelites' inspiring national myth narrative, suggesting that according to the modern archaeological and historical account, the Israelites and their culture did not overtake the region by force, but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic, and later monotheistic, religion of Yahwism centered on Yahweh, one of the gods of the Canaanite pantheon. The growth of Yahweh-centric belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting them apart from other Canaanites.
According to Dever, modern archaeologists have largely discarded the search for evidence of the biblical narrative surrounding the patriarchs and the exodus. According to the maximalist position, the modern archaeological record independently points to a narrative which largely agrees with the biblical account. This narrative testifies to the Israelites as a nomadic people known to the Egyptians as belonging to the Shasu. Over time these nomads left the desert and settled on the central mountain range of the land of Canaan, in simple semi-nomadic settlements in which pig bones are notably absent. This population gradually shifted from a tribal lifestyle to a monarchy. The archaeological record of the ninth century BCE provides evidence for two monarchies: one in the south under a dynasty founded by a figure named David, with its capital in Jerusalem, and one in the north under a dynasty founded by a figure named Omri, with its capital in Samaria. It also points to an early monarchic period in which these regions shared material culture and religion, suggesting a common origin. Archaeological finds also provide evidence for the later cooperation of these two kingdoms in their coalition against Aram, and for their destruction by the Assyrians and later by the Babylonians. Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle East, and that they share certain genetic traits with gentile peoples of the Fertile Crescent. The genetic composition of different Jewish groups shows that Jews share a common gene pool dating back four millennia, a marker of their common ancestral origin. Despite their long-term separation, Jewish communities maintained their unique commonalities, propensities, and sensibilities in culture, tradition, and language. History The earliest recorded evidence of a people by the name of Israel appears in the Merneptah Stele, which dates to around 1200 BCE. The majority of scholars agree that this text refers to the Israelites, a group that inhabited the central highlands of Canaan, where archaeological evidence shows that hundreds of small settlements were constructed between the 12th and 10th centuries BCE. The Israelites differentiated themselves from neighboring peoples through various distinct characteristics, including religious practices, a prohibition on intermarriage, and an emphasis on genealogy and family history. In the 10th century BCE, two neighboring Israelite kingdoms—the northern Kingdom of Israel and the southern Kingdom of Judah—emerged. Since their inception, they shared ethnic, cultural, linguistic, and religious characteristics despite a complicated relationship. Israel, with its capital mostly in Samaria, was larger and wealthier, and soon developed into a regional power. In contrast, Judah, with its capital in Jerusalem, was less prosperous and covered a smaller, mostly mountainous territory. However, while in Israel the royal succession was often decided by a military coup d'état, resulting in several dynasty changes, political stability in Judah was much greater, as it was ruled by the House of David for all four centuries of its existence. Scholars also describe Biblical Jews as a 'proto-nation' in the modern nationalist sense, comparable to the classical Greeks, the Gauls, and the British Celts. Around 720 BCE, the Kingdom of Israel was destroyed when it was conquered by the Neo-Assyrian Empire, which came to dominate the ancient Near East.
Under the Assyrian resettlement policy, a significant portion of the northern Israelite population was exiled to Mesopotamia and replaced by immigrants from the same region. During the same period, and throughout the 7th century BCE, the Kingdom of Judah, now under Assyrian vassalage, experienced a period of prosperity and witnessed significant population growth. This prosperity continued until the Neo-Assyrian king Sennacherib devastated the region of Judah in response to a rebellion in the area, ultimately halting at Jerusalem. Later in the same century, the Assyrians were defeated by the rising Neo-Babylonian Empire, and Judah became its vassal. In 587 BCE, following a revolt in Judah, the Babylonian king Nebuchadnezzar II besieged and destroyed Jerusalem and the First Temple, putting an end to the kingdom. The majority of Jerusalem's residents, including the kingdom's elite, were exiled to Babylon. According to the Book of Ezra, the Persian Cyrus the Great ended the Babylonian exile in 538 BCE, the year after he captured Babylon. The exile ended with the return under Zerubbabel the Prince (so called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple circa 521–516 BCE. As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehud Medinata), with a smaller territory and a reduced population. Judea was under control of the Achaemenids until the fall of their empire in c. 333 BCE to Alexander the Great. After several centuries under foreign imperial rule, the Maccabean Revolt against the Seleucid Empire resulted in the Hasmonean kingdom, under which the Jews once again enjoyed political independence, from 110 to 63 BCE. Under Hasmonean rule, the kingdom's boundaries were expanded to include not only the land of the historical kingdom of Judah, but also the Galilee and Transjordan. At the beginning of this process the Idumeans, who had infiltrated southern Judea after the destruction of the First Temple, were converted en masse. In 63 BCE, Judea was conquered by the Romans. From 37 BCE to 6 CE, the Romans allowed the Jews to maintain some degree of independence by installing the Herodian dynasty as vassal kings. However, Judea eventually came directly under Roman control and was incorporated into the Roman Empire as the province of Judaea. The Jewish–Roman wars, a series of failed uprisings against Roman rule during the first and second centuries CE, had profound and devastating consequences for the Jewish population of Judaea. The First Jewish–Roman War (66–73/74 CE) culminated in the destruction of Jerusalem and the Second Temple, after which the significantly diminished Jewish population was stripped of political autonomy. A few generations later, the Bar Kokhba revolt (132–136 CE) erupted in response to Roman plans to rebuild Jerusalem as a Roman colony and, possibly, to restrictions on circumcision. Its violent suppression by the Romans led to the near-total depopulation of Judea, and the demographic and cultural center of Jewish life shifted to Galilee. Jews were subsequently banned from residing in Jerusalem and the surrounding area, and the province of Judaea was renamed Syria Palaestina. These developments effectively ended Jewish efforts to restore political sovereignty in the region for nearly two millennia.
Similar upheavals impacted the Jewish communities in the empire's eastern provinces during the Diaspora Revolt (115–117 CE), leading to the near-total destruction of Jewish diaspora communities in Libya, Cyprus, and Egypt, including the highly influential community in Alexandria. The destruction of the Second Temple in 70 CE brought profound changes to Judaism. With the Temple's central place in Jewish worship gone, religious practices shifted towards prayer, Torah study (including Oral Torah), and communal gatherings in synagogues. Judaism also lost much of its sectarian nature.: 69 Two of the three main sects that flourished during the late Second Temple period, namely the Sadducees and Essenes, eventually disappeared, while Pharisaic beliefs became the foundational, liturgical, and ritualistic basis of Rabbinic Judaism, which has been the prevailing form of Judaism since late antiquity. The Jewish diaspora existed well before the destruction of the Second Temple in 70 CE and had been ongoing for centuries, with the dispersal driven by both forced expulsions and voluntary migrations. In Mesopotamia, a testimony to the beginnings of the Jewish community can be found in Jehoiachin's ration tablets, listing provisions allotted to the exiled Judean king and his family by Nebuchadnezzar II; further evidence comes from the Al-Yahudu tablets, dated to the 6th–5th centuries BCE and related to the exiles from Judea arriving after the destruction of the First Temple, though there is ample evidence for the presence of Jews in Babylonia even from 626 BCE. In Egypt, the documents from Elephantine reveal the trials of a community founded by a Jewish garrison in Persian service at two frontier fortresses during the 5th–4th centuries BCE, and according to Josephus the Jewish community in Alexandria existed since the founding of the city in the 4th century BCE by Alexander the Great. By 200 BCE, there were well-established Jewish communities both in Egypt and Mesopotamia ("Babylonia" in Jewish sources), and in the two centuries that followed, Jewish populations were also present in Asia Minor, Greece, Macedonia, Cyrene, and, beginning in the middle of the first century BCE, in the city of Rome. Later, in the first centuries CE, as a result of the Jewish–Roman wars, a large number of Jews were taken as captives, sold into slavery, or compelled to flee from the regions affected by the wars, contributing to the formation and expansion of Jewish communities across the Roman Empire as well as in Arabia and Mesopotamia. After the Bar Kokhba revolt, the Jewish population in Judaea—now significantly reduced—made efforts to recover from the revolt's devastating effects, but never fully regained its former strength. Between the second and fourth centuries CE, the region of Galilee emerged as the primary center of Jewish life in Syria Palaestina, experiencing both demographic growth and cultural development. It was during this period that two central rabbinic texts, the Mishnah and the Jerusalem Talmud, were composed. The Romans recognized the patriarchs—rabbinic sages such as Judah ha-Nasi—as representatives of the Jewish people, granting them a certain degree of autonomy. However, as the Roman Empire gave way to the Christianized Byzantine Empire under Constantine, Jews began to face persecution by both the Church and the imperial authorities, and many emigrated to diaspora communities.
By the fourth century CE, Jews are believed to have lost their demographic majority in Syria Palaestina. The long-established Jewish community of Mesopotamia, which had been living under Parthian and later Sasanian rule, beyond the confines of the Roman Empire, became an important center of Jewish study as Judea's Jewish population declined. Estimates often place the Babylonian Jewish community of the 3rd to 7th centuries at around one million, making it the largest Jewish diaspora community of that period. Under the political leadership of the exilarch, who was regarded as a royal heir of the House of David, this community had an autonomous status and served as a place of refuge for the Jews of Syria Palaestina. A number of significant Talmudic academies, such as the Nehardea, Pumbedita, and Sura academies, were established in Mesopotamia, and many important Amoraim were active there. The Babylonian Talmud, a centerpiece of Jewish religious law, was compiled in Babylonia in the 3rd to 6th centuries. Jewish diaspora communities are generally described as having coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (initially in the Rhineland and France), the Sephardim (initially in the Iberian Peninsula), and the Mizrahim (Middle East and North Africa). Romaniote Jews, Tunisian Jews, Yemenite Jews, Egyptian Jews, Ethiopian Jews, Bukharan Jews, Mountain Jews, and other groups also predate the arrival of the Sephardic diaspora. During the same period, Jewish communities in the Middle East thrived under Islamic rule, especially in cities like Baghdad, Cairo, and Damascus. In Babylonia, from the 7th to the 11th centuries, the Pumbedita and Sura academies led the Jewry of the Arab world and, to an extent, the entire Jewish world. The deans and students of these academies defined the Geonic period in Jewish history. Following this period were the Rishonim, who lived from the 11th to the 15th centuries. Like their European counterparts, Jews in the Middle East and North Africa also faced periods of persecution and discriminatory policies, with the Almohad Caliphate in North Africa and Iberia issuing forced conversion decrees, causing Jews such as Maimonides to seek safety in other regions. Despite experiencing repeated waves of persecution, Ashkenazi Jews in Western Europe worked in a variety of fields, making an impact on their communities' economies and societies. In Francia, for example, figures like Isaac Judaeus and Armentarius occupied prominent social and economic positions. Francia also witnessed the development of a sophisticated tradition of biblical commentary, as exemplified by Rashi and the tosafists. In 1144, the first documented blood libel occurred in Norwich, England, marking an escalation in the pattern of discrimination and violence that Jews had already been subjected to throughout medieval Europe. During the 12th and 13th centuries, Jews faced frequent antisemitic legislation, including laws prescribing distinctive dress, alongside segregation, repeated blood libels, pogroms, and massacres, continuing a pattern of violence that included the Rhineland massacres of 1096. The Jews of the Holy Roman Empire were designated Servi camerae regis ("servants of the imperial chamber") by Frederick II, a status that afforded limited protection while simultaneously entangling them in the political struggles between the emperor and the German principalities and cities.
Persecution intensified during the Black Death in the mid-14th century, when Jews were accused of poisoning wells and many communities were destroyed. These pressures, combined with major expulsions such as that from England in 1290, gradually pushed Ashkenazi Jewish populations eastward into Poland, Lithuania, and Russia. The Iberian Peninsula was home to one of the largest Jewish communities of the Middle Ages, for a time the largest in Europe. Iberian Jewry endured discrimination under the Visigoths but saw its fortunes improve under Umayyad rule and later the Taifa kingdoms. During this period, the Jews of Muslim Spain entered a "Golden Age" marked by achievements in Hebrew poetry and literature, religious scholarship, grammar, medicine, and science, with leading figures including Hasdai ibn Shaprut, Judah Halevi, Moses ibn Ezra, and Solomon ibn Gabirol. Jews also rose to high office, most notably Samuel ibn Naghrillah, a scholar and poet who served as grand vizier and military commander of Granada. The Golden Age ended with the rise of the radical Almoravid and Almohad dynasties, whose persecutions, together with the advancing Reconquista, drove many Jews from Iberia (including Maimonides). In 1391, widespread pogroms swept across Spain, leaving thousands dead and forcing mass conversions. The Spanish Inquisition was later established to pursue, torture, and execute conversos who continued to practice Judaism in secret, while public disputations were staged to discredit Judaism. In 1492, after the Reconquista, Isabella I of Castile and Ferdinand II of Aragon decreed the expulsion of all Jews who refused conversion, sending an estimated 200,000 into exile in Portugal, Italy, North Africa, and the Ottoman Empire. In 1497, Portugal's roughly 30,000 Jews were formally ordered to leave but were instead forcibly converted so that the crown could retain their economic role. In 1498, some 3,500 Jews were expelled from Navarre. Many converts outwardly adopted Christianity while secretly preserving Jewish practices, becoming crypto-Jews (also known as marranos or anusim), who remained targets of the various Inquisitions for centuries. Following the expulsions from Spain and Portugal in the 1490s, Jewish exiles dispersed across the Mediterranean, Europe, and North Africa. Many settled in the Ottoman Empire—which, replacing the Iberian Peninsula, became home to the world's largest Jewish population—where new communities developed in Anatolia, the Balkans, and the Land of Israel. Cities such as Istanbul and Thessaloniki grew into major Jewish centers, while in 16th-century Safed a flourishing spiritual life took shape. There, Solomon Alkabetz, Moses Cordovero, and Isaac Luria developed influential new schools of Kabbalah, giving powerful impetus to Jewish mysticism, and Joseph Karo composed the Shulchan Aruch, which became a cornerstone of Jewish law. In the 17th century, Portuguese conversos who returned to Judaism and engaged in trade and banking helped establish Amsterdam as a prosperous Jewish center, while also forming communities in cities such as Antwerp and London. This period also witnessed waves of messianic fervor, most notably the rise of the Sabbatean movement in the 1660s, led by Sabbatai Zvi of İzmir, which reverberated throughout the Jewish world. In Eastern Europe, Poland–Lithuania became the principal center of Ashkenazi Jewry, eventually becoming home to the largest Jewish population in the world.
Jewish life flourished there in the early modern era, supported by relative stability, economic opportunity, and strong communal institutions. The mid-17th century brought devastation with the Cossack uprisings in Ukraine, which reversed migration flows and sent refugees westward, yet Poland–Lithuania remained the demographic and cultural heartland of Ashkenazic Jewry. Following the partitions of Poland, most of its Jews came under Russian rule and were confined to the "Pale of Settlement." The 18th century also witnessed new religious and intellectual currents. Hasidism, founded by the Baal Shem Tov, emphasized mysticism and piety, while its opponents, the Misnagdim ("opponents") led by the Vilna Gaon, defended rabbinic scholarship and tradition. In Western Europe, during the 1760s and 1770s, the Haskalah (Jewish Enlightenment) emerged in German-speaking lands, where figures such as Moses Mendelssohn promoted secular learning, vernacular literacy, and integration into European society. Elsewhere, Jews began to be re-admitted to Western Europe, including England, where Menasseh ben Israel petitioned Oliver Cromwell for their return. In the Americas, Jews of Sephardic descent first arrived as conversos in Spanish and Portuguese colonies, where many faced trial by Inquisition tribunals for "judaizing." A more durable presence began in Dutch Brazil, where Jews openly practiced their religion and established the first synagogues in the New World, before the Portuguese reconquest forced their dispersal to Amsterdam, the Caribbean, and North America. Sephardic communities took root in Curaçao, Suriname, Jamaica, and Barbados, later joined by Ashkenazi migrants. In North America, Jews were present from the mid-17th century, with New Amsterdam hosting the first organized congregation in 1654. By the time of the American Revolution, small communities in New York, Newport, Philadelphia, Savannah, and Charleston played an active role in the struggle for independence. Over the course of the 19th century, Jews in Western Europe gradually achieved legal emancipation, though social acceptance remained limited by persistent antisemitism and rising nationalism. In Eastern Europe, particularly within the Russian Empire's Pale of Settlement, Jews faced mounting legal restrictions and recurring pogroms. From this environment emerged Zionism, a national revival movement originating in Central and Eastern Europe that sought to re-establish a Jewish polity in the Land of Israel as a means of returning the Jewish people to their ancestral homeland and ending centuries of exile and persecution. This led to waves of Jewish migration to Ottoman-controlled Palestine. Theodor Herzl, who is considered the father of political Zionism, offered his vision of a future Jewish state in his 1896 book Der Judenstaat (The Jewish State); a year later, he presided over the First Zionist Congress. The antisemitism that afflicted Jewish communities in Europe also triggered a mass exodus of 2.8 million Jews to the United States between 1881 and 1924. Despite this, Jews in Europe and the United States made great achievements in various fields of science and culture. Among the most influential from this period are Albert Einstein in physics, Sigmund Freud in psychology, Franz Kafka in literature, and Irving Berlin in music. Many Nobel Prize winners at this time were Jewish, as is still the case.
When Adolf Hitler and the Nazi Party came to power in Germany in 1933, the situation for Jews deteriorated rapidly as a direct result of Nazi policies. Many Jews fled from Europe to Mandatory Palestine, the United States, and the Soviet Union as a result of racial antisemitic laws, economic difficulties, and the fear of an impending war. World War II started in 1939, and by 1941 Nazi Germany had occupied almost all of Europe. Following the German invasion of the Soviet Union in 1941, the Final Solution—an extensive, organized effort of unprecedented scope intended to annihilate the Jewish people—began, and resulted in the persecution and murder of Jews in Europe and North Africa. In occupied Poland, three million Jews were murdered in gas chambers across all the camps combined, one million of them at the Auschwitz camp complex alone. The Holocaust is the name given to this genocide, in which six million Jews in total were systematically murdered. Before and during the Holocaust, enormous numbers of Jews immigrated to Mandatory Palestine. In 1944, the Jewish insurgency in Mandatory Palestine began with the aim of gaining full independence from the United Kingdom. On 14 May 1948, upon the termination of the mandate, David Ben-Gurion declared the creation of the State of Israel, a Jewish and democratic state. Immediately afterwards, all neighboring Arab states invaded, and were resisted by the newly formed Israel Defense Forces. In 1949, the war ended and Israel started building its state and absorbing waves of Aliyah, granting citizenship to Jews all over the world via the Law of Return, passed in 1950. However, both the Israeli–Palestinian conflict and the wider Arab–Israeli conflict continue to this day. Culture The Jewish people and the religion of Judaism are strongly interrelated. Converts to Judaism have a status within the Jewish people equal to those born into it. However, converts who do not go on to practice Judaism are likely to be viewed with skepticism. Mainstream Judaism does not proselytize, and conversion is considered a difficult undertaking. A significant portion of conversions are undertaken by children of mixed marriages, or would-be or current spouses of Jews. The Hebrew Bible, a religious interpretation of the traditions and early history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54 percent of the world's population. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity rather difficult. Throughout history, in eras and places as diverse as the ancient Hellenic world, Europe before and after the Age of Enlightenment (see Haskalah), Islamic Spain and Portugal, North Africa and the Middle East, India, China, and the contemporary United States and Israel, cultural phenomena have developed that are in some sense characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, and still others from the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities.
Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"), the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as the Jewish communities of Asoristan, known to Jews as Babylonia, spoke Hebrew and Aramaic, the languages of the Babylonian Talmud. Dialects of these same languages were also used by the Jews of Syria Palaestina at that time. For centuries, Jews worldwide have spoken the local or dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that became independent languages. Yiddish is the Judaeo-German language developed by Ashkenazi Jews who migrated to Central Europe. Ladino is the Judaeo-Spanish language developed by Sephardic Jews who lived in the Iberian Peninsula. Due to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish languages of several communities, including Judaeo-Georgian, Judaeo-Arabic, Judaeo-Berber, Krymchak, Judaeo-Malayalam, and many others, have largely fallen out of use. For over sixteen centuries Hebrew was used almost exclusively as a liturgical language and as the language in which most books on Judaism were written, with a few Jews speaking only Hebrew on the Sabbath. Hebrew was revived as a spoken language by Eliezer ben Yehuda, who arrived in Palestine in 1881; it had not been used as a mother tongue since Tannaitic times. Modern Hebrew is designated as the "State language" of Israel. Despite efforts to revive Hebrew as the national language of the Jewish people, most Jews worldwide do not know the language, and English has emerged as the lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century, most Jews lack such knowledge today, and English has by and large superseded most Jewish vernaculars. The three most commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language, but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and the State of Israel. In some places, the mother language of the Jewish community differs from that of the general population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the Sephardic minority uses French as its primary language. Similarly, South African Jews adopted English rather than Afrikaans. Due to both Czarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews, but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish communities in a number of post-Soviet states, such as Ukraine and Uzbekistan, as well as for Ashkenazic Jews in Azerbaijan, Georgia, and Tajikistan.
Although communities in North Africa today are small and dwindling, Jews there had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and the city of Tunis, while most North Africans continue to use Arabic or Berber as their mother tongue. There is no single governing body for the Jewish community, nor a single authority with responsibility for religious doctrine. Instead, a variety of secular and religious institutions at the local, national, and international levels lead various parts of the Jewish community on a variety of issues. Today, many countries have a Chief Rabbi who serves as a representative of that country's Jewry. Although many Hasidic Jews follow a certain hereditary Hasidic dynasty, there is no one commonly accepted leader of all Hasidic Jews. Many Jews believe that the Messiah will act as a unifying leader for Jews and the entire world. A number of modern scholars of nationalism support the existence of Jewish national identity in antiquity. One of them is David Goodblatt, who generally believes in the existence of nationalism before the modern period. In his view, the Bible, the parabiblical literature, and the Jewish national history provide the basis for a Jewish collective identity. Although many of the ancient Jews were illiterate (as were their neighbors), their national narrative was reinforced through public readings. The Hebrew language also constructed and preserved national identity. Although it was not widely spoken after the 5th century BCE, Goodblatt states: the mere presence of the language in spoken or written form could invoke the concept of a Jewish national identity. Even if one knew no Hebrew or was illiterate, one could recognize that a group of signs was in Hebrew script. ... It was the language of the Israelite ancestors, the national literature, and the national religion. As such it was inseparable from the national identity. Indeed its mere presence in visual or aural medium could invoke that identity. Anthony D. Smith, a historical sociologist considered one of the founders of the field of nationalism studies, wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation [...] than perhaps anywhere else in the ancient world." He adds that this observation "must make us wary of pronouncing too readily against the possibility of the nation, and even a form of religious nationalism, before the onset of modernity." Agreeing with Smith, Goodblatt suggests omitting the qualifier "religious" from Smith's definition of ancient Jewish nationalism, noting that, according to Smith, a religious component in national memories and culture is common even in the modern era. This view is echoed by political scientist Tom Garvin, who writes that "something strangely like modern nationalism is documented for many peoples in medieval times and in classical times as well," citing the ancient Jews as one of several "obvious examples", alongside the classical Greeks and the Gaulish and British Celts. Fergus Millar suggests that the sources of Jewish national identity and the early Jewish nationalist movements of the first and second centuries CE included several key elements: the Bible as both a national history and legal source, the Hebrew language as a national language, a system of law, and social institutions such as schools, synagogues, and Sabbath worship.
Adrian Hastings argued that the Jews are the "true proto-nation" which, through the model of ancient Israel found in the Hebrew Bible, provided the world with the original concept of nationhood that later influenced Christian nations. However, following Jerusalem's destruction in the first century CE, Jews ceased to be a political entity and did not resemble a traditional nation-state for almost two millennia. Despite this, they maintained their national identity through collective memory, religion, and sacred texts, even without land or political power, and remained a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of Israel. Steven Weitzman suggests that Jewish nationalist sentiment in antiquity was encouraged because under foreign rule (Persians, Greeks, Romans) Jews were able to claim that they were an ancient nation. This claim was based on the preservation and reverence of their scriptures, the Hebrew language, the Temple and priesthood, and other traditions of their ancestors. Doron Mendels further observes that the Hasmonean kingdom, one of the few examples of indigenous statehood in its time, significantly reinforced Jewish national consciousness. The memory of this period of independence contributed to the persistent efforts to revive Jewish sovereignty in Judea, leading to the major revolts against Roman rule in the 1st and 2nd centuries CE. Demographics Within the world's Jewish population there are distinct ethnic divisions, most of which are primarily the result of geographic branching from an originating Israelite population and subsequent independent evolution. An array of Jewish communities was established by Jewish settlers in various places around the Old World, often at great distances from one another, resulting in effective and often long-term isolation. During the millennia of the Jewish diaspora, the communities developed under the influence of their local environments: political, cultural, natural, and populational. Today, manifestations of these differences among the Jews can be observed in the Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical practices, religious interpretations, and degrees and sources of genetic admixture. Jews are often identified as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim are so named in reference to their geographical origins (their ancestors' culture coalesced in the Rhineland, an area historically referred to by Jews as Ashkenaz). Similarly, Sephardim (Sefarad meaning "Spain" in Hebrew) are named in reference to their origins in Iberia. The diverse groups of Jews of the Middle East and North Africa are often collectively referred to as Sephardim, together with Sephardim proper, for liturgical reasons having to do with their prayer rites. A common term for many of these non-Spanish Jews, who are sometimes still broadly grouped as Sephardim, is Mizrahim (lit. 'easterners' in Hebrew). Nevertheless, Mizrahim and Sephardim are usually ethnically distinct. Smaller groups include, but are not restricted to, Indian Jews such as the Bene Israel, Bnei Menashe, Cochin Jews, and Bene Ephraim; the Romaniotes of Greece; the Italian Jews ("Italkim" or "Bené Roma"); the Teimanim from Yemen; various African Jews, including most numerously the Beta Israel of Ethiopia; and Chinese Jews, most notably the Kaifeng Jews, as well as various other distinct but now almost extinct communities.
The divisions between all these groups are approximate and their boundaries are not always clear. The Mizrahim, for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish communities that are no more closely related to each other than they are to any of the earlier-mentioned Jewish groups. In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite independent development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish Jews, Moroccan Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews, Afghan Jews, and various others. The Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among them differs from that found among Mizrahim. In addition, there is a differentiation made between Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent the bulk of modern Jewry, comprising at least 70 percent of Jews worldwide (and up to 90 percent prior to World War II and the Holocaust). As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish population representative of all groups, a melting pot independent of each group's proportion within the overall world Jewish population. Y-DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male-line ancestors appear to have been mainly Middle Eastern. For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany, and the French Rhine Valley. This is consistent with Jewish traditions placing most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40 percent of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude: "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons."
However, a 2025 genetic study on the Ashkenazi Jewish founder population supports the presence of a substantial Near Eastern component in the maternal lineages. Analyses of mitochondrial DNA (mtDNA) indicate that the core founder lineages, estimated at around 54, likely originated from the Near East, with these founder signatures appearing in multiple copies across the population. While later admixture introduced additional mtDNA lineages, these absorbed lineages are distinguishable from the original founders. The findings are consistent with genome-wide identity-by-descent and lineage-extinction analyses, reinforcing the Near Eastern origin of the Ashkenazi maternal founders. A study showed that 7 percent of Ashkenazi Jews have the haplogroup G2c, which is mainly found in Pashtuns and, at lower frequencies, in all major Jewish groups, Palestinians, Syrians, and Lebanese. Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most in a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic compositions of Ashkenazi, Sephardic, and Mizrahi Jewish populations show a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". Jews of North African, Italian, and Iberian origin show variable frequencies of admixture with historical non-Jewish host populations along the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly Southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations. Behar et al. have remarked on a close relationship between Ashkenazi Jews and modern Italians. A 2001 study found that Jews were more closely related to groups of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors, whose genetic signature was found in geographic patterns reflective of the Islamic conquests. The studies also show that Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism), who comprise up to 19.8 percent of the population of today's Iberia (Spain and Portugal) and at least 10 percent of the population of Ibero-America (Hispanic America and Brazil), have Sephardic Jewish ancestry within the last few centuries. The Bene Israel and Cochin Jews of India, the Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries, have also been thought to have some more remote ancient Jewish ancestry. Views on the Lemba have changed, and genetic Y-DNA analyses in the 2000s established a partially Middle Eastern origin for a portion of the male Lemba population, though they have been unable to narrow this down further. Although historically Jews have been found all over the world, in the decades since World War II and the establishment of Israel, they have increasingly concentrated in a small number of countries.
In 2021, Israel and the United States together accounted for about 85 percent of the global Jewish population, with approximately 45.3% and 39.6% of the world's Jews, respectively. More than half (51.2%) of world Jewry resides in just ten metropolitan areas. As of 2021, these ten areas were Tel Aviv, New York, Jerusalem, Haifa, Los Angeles, Miami, Philadelphia, Paris, Washington, and Chicago. The Tel Aviv metro area has the highest percentage of Jews among the total population (94.8%), followed by Haifa (73.1%), Jerusalem (72.3%), and Beersheba (60.4%), the balance mostly being Israeli Arabs. Outside Israel, the highest percentage of Jews in a metropolitan area was in New York (10.8%), followed by Miami (8.7%), Philadelphia (6.8%), San Francisco (5.1%), Washington (4.7%), Los Angeles (4.7%), Toronto (4.5%), and Baltimore (4.1%). As of 2010, there were nearly 14 million Jews around the world, roughly 0.2% of the world's population at the time. According to the 2007 estimates of The Jewish People Policy Planning Institute, the world's Jewish population was 13.2 million. This statistic incorporates both practicing Jews affiliated with synagogues and the Jewish community, and approximately 4.5 million unaffiliated and secular Jews. According to Sergio Della Pergola, a demographer of the Jewish population, in 2021 there were about 6.8 million Jews in Israel, 6 million in the United States, and 2.3 million in the rest of the world. Israel, the Jewish nation-state, is the only country in which Jews make up a majority of the citizens. Israel was established as an independent democratic and Jewish state on 14 May 1948. As of 2016, 14 of the 120 members of its parliament, the Knesset, were Arab citizens of Israel (not including the Druze), most representing Arab political parties, and one of Israel's Supreme Court judges was also an Arab citizen of Israel. Between 1948 and 1958, the Jewish population rose from 800,000 to two million. Currently, Jews account for 75.4 percent of the Israeli population, or 6 million people. The early years of the State of Israel were marked by the mass immigration of Holocaust survivors and of Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to Israel in the late 1980s and early 1990s. Between 1974 and 1979, some 227,258 immigrants arrived in Israel, about half of them from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe, Latin America, and North America. A trickle of immigrants from other communities has also arrived, including Indian Jews and others, as well as some descendants of Ashkenazi Holocaust survivors who had settled in countries such as the United States, Argentina, Australia, Chile, and South Africa. Some Jews have emigrated from Israel elsewhere because of economic problems or disillusionment with political conditions and the continuing Arab–Israeli conflict. Jewish Israeli emigrants are known as yordim.
The waves of immigration to the United States and elsewhere at the turn of the 20th century, the founding of Zionism, and later events, including pogroms in Imperial Russia (mostly within the Pale of Settlement in present-day Ukraine, Moldova, Belarus, and eastern Poland), the massacre of European Jewry during the Holocaust, and the founding of the state of Israel, with the subsequent Jewish exodus from Arab lands, all resulted in substantial shifts in the population centers of world Jewry by the end of the 20th century. More than half of the world's Jews live in the Diaspora. Currently, the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world, is located in the United States, with 6 million to 7.5 million Jews by various estimates. Elsewhere in the Americas, there are also large Jewish populations in Canada (315,000), Argentina (180,000–300,000), and Brazil (196,000–600,000), and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia, and several other countries (see History of the Jews in Latin America). According to a 2010 Pew Research Center study, about 470,000 people of Jewish heritage live in Latin America and the Caribbean. Demographers disagree on whether the United States has a larger Jewish population than Israel, with many maintaining that Israel surpassed the United States in Jewish population during the 2000s, while others maintain that the United States still has the largest Jewish population in the world. A major national Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North African countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish community of 292,000. In Eastern Europe, exact figures are difficult to establish. The number of Jews in Russia varies widely according to whether a source uses census data (which requires a person to choose a single nationality among choices that include "Russian" and "Jewish") or eligibility for immigration to Israel (which requires that a person have one or more Jewish grandparents). According to the latter criterion, the heads of the Russian Jewish community assert that up to 1.5 million Russians are eligible for aliyah. In Germany, the 102,000 Jews registered with the Jewish community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region, 15 to 20 percent in the Kingdom of Iraq, approximately 10 percent in the Kingdom of Egypt, and approximately 7 percent in the Kingdom of Yemen. A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Muslim-majority countries, mainly in Turkey (14,200) and Iran (9,100), while Morocco (2,000), Tunisia (1,000), and the United Arab Emirates (500) host the largest communities in the Arab world.
A small-scale exodus had begun in many countries in the early decades of the 20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries took place primarily from 1948 onward. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily from Iraq, Yemen, and Libya, with up to 90 percent of these communities leaving within a few years. The peak of the exodus from Egypt occurred in 1956. The exodus in the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s, when around 80 percent of Iranian Jews left the country. Outside Europe, the Americas, the Middle East, and the rest of Asia, there are significant Jewish populations in Australia (112,500) and South Africa (70,000). There is also a 6,800-strong community in New Zealand. Since at least the time of the Ancient Greeks, a proportion of Jews have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism and losing their Jewish identity. Assimilation took place in all areas and during all time periods, with some Jewish communities, for example the Kaifeng Jews of China, disappearing entirely. The advent of the Jewish Enlightenment of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America in the 19th century accelerated this process, encouraging Jews to increasingly participate in, and become part of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop participating in the Jewish community. Rates of interreligious marriage vary widely: in the United States, the rate is just under 50 percent; in the United Kingdom, around 53 percent; in France, around 30 percent; and in Australia and Mexico, as low as 10 percent. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice. The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations as Jews continue to assimilate into the countries in which they live. The Jewish people and Judaism have experienced various persecutions throughout their history. During Late Antiquity and the Early Middle Ages, the Roman Empire (in its later phases known as the Byzantine Empire) repeatedly repressed the Jewish population, first by ejecting them from their homelands during the pagan Roman era and later by officially establishing them as second-class citizens during the Christian Roman era. According to James Carroll, "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million." Later, in medieval Western Europe, further persecutions of Jews by Christians occurred, notably during the Crusades—when Jews all over Germany were massacred—and in a series of expulsions from the Kingdom of England, Germany, and France.
Then there occurred the largest expulsion of all, when Spain and Portugal, after the Reconquista (the Catholic reconquest of the Iberian Peninsula), expelled both unbaptized Sephardic Jews and the ruling Muslim Moors. In the Papal States, which existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. Islam and Judaism have a complex relationship. Traditionally, Jews and Christians living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several social and legal disabilities, such as prohibitions against bearing arms or giving testimony in courts in cases involving Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad; its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom or exile, or compulsion to change their religion, and they were mostly free in their choice of residence and profession. Notable exceptions include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters known as mellahs beginning in the 15th century and especially in the early 19th century. In modern times, it has become commonplace for standard antisemitic themes to be conflated with anti-Zionism in the publications and pronouncements of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic of Iran, and even in the newspapers and other publications of the Turkish Refah Partisi. Throughout history, many rulers, empires, and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient to silence dissent. The history of antisemitism includes the First Crusade, which resulted in the massacre of Jews; the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the pogroms backed by the Russian tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8 percent of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 16 million Jews in 1939, almost 40% were murdered in the Holocaust.
The Holocaust—the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators—remains the most notable modern-day persecution of Jews. The persecution and genocide were accomplished in stages. Legislation to remove the Jews from civil society was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and Roma were crammed into ghettos before being transported hundreds of kilometres by freight train to extermination camps where, if they survived the journey, the majority of them were murdered in gas chambers. Virtually every arm of Germany's bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar has called "a genocidal nation." Throughout Jewish history, Jews have repeatedly been directly or indirectly expelled from both their original homeland, the Land of Israel, and many of the areas in which they have settled. This experience as refugees has shaped Jewish identity and religious practice in many ways, and is thus a major element of Jewish history. In summary, the pogroms in Eastern Europe, the rise of modern antisemitism, the Holocaust, as well as the rise of Arab nationalism, all served to fuel the movements and migrations of huge segments of Jewry from land to land and continent to continent, until they arrived back in large numbers at their original historical homeland in Israel. In the Bible, the patriarch Abraham is described as a migrant to the land of Canaan from Ur of the Chaldees. His descendants, the Children of Israel, undertook the Exodus (meaning "departure" or "exit" in Greek) from ancient Egypt, as described in the Book of Exodus. The first movement documented in the historical record occurred with the resettlement policy of the Neo-Assyrian Empire, which mandated the deportation of conquered peoples; it is estimated that some 4,500,000 among its captive populations suffered this dislocation over three centuries of Assyrian rule. With regard to Israel, Tiglath-Pileser III claims he deported 80% of the population of Lower Galilee, some 13,520 people. Some 27,000 Israelites, 20 to 25% of the population of the Kingdom of Israel, were described as being deported by Sargon II and sent into permanent exile by Assyria, initially to the Upper Mesopotamian provinces of the Assyrian Empire, and were replaced by other deported populations. Between 10,000 and 80,000 people from the Kingdom of Judah were similarly exiled by Babylonia, but these people were then returned to Judea by Cyrus the Great of the Persian Achaemenid Empire. Many Jews were exiled again by the Roman Empire. The 2,000-year dispersion of the Jewish diaspora began under the Roman Empire, as Jews spread throughout the Roman world and, driven from land to land, settled wherever they could live freely enough to practice their religion. Over the course of the diaspora, the center of Jewish life moved from Babylonia to the Iberian Peninsula to Poland to the United States and, as a result of Zionism, back to Israel.
There were also many expulsions of Jews during the Middle Ages and Enlightenment in Europe: in 1290, 16,000 Jews were expelled from England (see the Statute of Jewry); in 1396, 100,000 from France; in 1421, thousands were expelled from Austria. Many of these Jews settled in East-Central Europe, especially Poland. Following the Spanish Inquisition, in 1492 around 200,000 Sephardic Jews were expelled from Spain by the Spanish crown and Catholic church, followed by expulsions in 1493 in Sicily (37,000 Jews) and in Portugal in 1496. The expelled Jews fled mainly to the Ottoman Empire, the Netherlands, and North Africa, with others migrating to Southern Europe and the Middle East. During the 19th century, France's policies of equal citizenship regardless of religion led to the immigration of Jews (especially from Eastern and Central Europe). Emigration from Europe also brought millions of Jews to the New World: over two million Eastern European Jews arrived in the United States from 1880 to 1925. In the latest phase of migrations, the Islamic Revolution of Iran caused many Iranian Jews to flee Iran. Most found refuge in the US (particularly Los Angeles, California, and Long Island, New York) and Israel. Smaller communities of Persian Jews exist in Canada and Western Europe. Similarly, when the Soviet Union collapsed, many of the Jews in the affected territory (who had been refuseniks) were suddenly allowed to leave, producing a wave of migration to Israel in the early 1990s. Israel is the only country with a Jewish population that is consistently growing through natural population growth, although the Jewish populations of other countries, in Europe and North America, have recently increased through immigration. In the Diaspora, in almost every country the Jewish population is either declining or steady, but Orthodox and Haredi Jewish communities, whose members often shun birth control for religious reasons, have experienced rapid population growth. Orthodox and Conservative Judaism discourage proselytism to non-Jews, but many Jewish groups have tried to reach out to the assimilated Jewish communities of the Diaspora in order for them to reconnect to their Jewish roots. Additionally, while in principle Reform Judaism favours seeking new members for the faith, this position has not translated into active proselytism, instead taking the form of an effort to reach out to non-Jewish spouses of intermarried couples. There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity so there is less chance of intermarriage. As a result of the efforts by these and other Jewish groups over the past 25 years, there has been a trend (known as the Baal teshuva movement) for secular Jews to become more religiously observant, though the demographic implications of the trend are unknown. Additionally, there is a growing rate of conversion to Judaism by gentiles who choose to become Jews, known as Jews by choice. Contributions Jewish individuals have played a significant role in the development and growth of Western culture, advancing many fields of thought, science and technology, both historically and in modern times, including through discrete trends in Jewish philosophy, Jewish ethics and Jewish literature, as well as specific trends in Jewish culture, including in Jewish art, Jewish music, Jewish humor, Jewish theatre, Jewish cuisine and Jewish medicine. 
Jews have established various Jewish political movements and religious movements, and, through the authorship of the Hebrew Bible and parts of the New Testament, provided the foundation for Christianity and Islam. More than 20 percent of Nobel Prizes have been awarded to individuals of Jewish descent. Philanthropic giving is a core function of many Jewish organizations.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Retro_Gamer] | [TOKENS: 834] |
Retro Gamer Retro Gamer is a British magazine, published worldwide, covering retro video games. It was the first commercial magazine to be devoted entirely to the subject. Launched in January 2004 as a quarterly publication, Retro Gamer soon became a monthly. In 2005, a general decline in gaming and computer magazine readership led to the closure of its publisher, Live Publishing, and the rights to the magazine were later purchased by Imagine Publishing. It was taken over by Future plc on 21 October 2016, following Future's acquisition of Imagine Publishing. History The first 18 issues of the magazine came with a coverdisk. It usually contained freeware remakes of retro video games and emulators, but also videos and free commercial PC software such as The Games Factory and The Elder Scrolls: Arena. Some issues had themed CDs containing the entire back catalogue of a publisher, such as Durell, Llamasoft and Gremlin Graphics. On 27 September 2005, the magazine's original publishing company, Live Publishing, went into bankruptcy. The magazine's official online forums described the magazine as "finished" shortly before issue #19 was due for release. However, rights to Retro Gamer were purchased by Imagine Publishing in October 2005 and the magazine was re-launched on 8 December 2005. Retro Survival is a commercial CD retro games magazine put together by the freelance writers of Retro Gamer when Live Publishing collapsed. The CD was published in November 2005 and contains articles that would have appeared in issue 19 of Retro Gamer, as well as several extras, including a foreword by celebrity games journalist Mr Biffo. In June 2004, a tribute to Zzap!64 was included, "The DEF Tribute to Zzap!64", celebrating the 20th anniversary of the Commodore 64-focused magazine. It also includes interviews with leading 80s and 90s programmers, such as David Crane, Matthew Smith and Archer Maclean. Regular columns include Back to the 80s and 90s, Desert Island Disks (what games a gaming celebrity would take to a desert island) and From the Archives (a profile of a particular game developer or publisher). "The Making Of" is a recurring feature in which well-known developers are interviewed about the creation and design process behind their games. Classic titles covered in past issues have included Breakout (Steve Wozniak), Dungeon Master (Doug Bell), Smash TV (Eugene Jarvis), Starfox (Jez San), Rescue on Fractalus! (David Fox/Charlie Kellner), Prince of Persia (Jordan Mechner), Berzerk (Alan McNeil), The Hitchhiker's Guide to the Galaxy (Steve Meretzky), Crystal Castles (Franz X. Lanzinger), Tetris (Alexey Pajitnov), Sheep in Space (Jeff Minter), Out Run (Yu Suzuki) and Splat! (Ian Andrew). Issue 48 (February 2008) contained an exclusive interview with Manic Miner creator Matthew Smith, written by freelancer Paul Drury after a visit to Smith's family home in Liverpool. March 2010 (issue 75) saw John Romero collaborating with Retro Gamer as 'Guest Editor', taking charge of the magazine's editorial content and bringing his own style to a number of his favourite articles and subjects throughout the magazine. The magazine celebrated its 200th issue in October 2019, and as of March 2023 the staff consists of Editor Darran Jones, Production Editor Tim Empey, Features Editor Nick Thorpe and Art Editor Andy Salter. 
The magazine posts its own issue preview videos on its YouTube channel, featuring editor Darran Jones and Production Editor Drew Sleep as hosts. Digital version Three DVDs, each containing 25 to 30 issues, have been released over the years. Retro Gamer is also available as an iOS app and can be downloaded onto iPhone and iPad. Awards Retro Gamer won Best Magazine at the 2010 Games Media Awards.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/1990s_United_States_boom] | [TOKENS: 1402] |
1990s United States boom The 1990s economic boom in the United States was a major economic expansion that lasted between 1993 and 2001, coinciding with the economic policies of the Clinton administration. It began following the early 1990s recession during the presidency of George H. W. Bush and ended following the dot-com crash in 2000. Until July 2019, it was the longest recorded economic expansion in the history of the United States. Background The 1990s are remembered as a time of strong economic growth, steady job creation, low inflation, rising productivity, and a surging stock market, resulting from a combination of rapid technological change and sound central monetary policy. The prosperity of the 1990s was not evenly distributed over the entire decade. The economy was in recession from July 1990 to March 1991, having suffered the S&L crisis in 1989, a spike in gas prices as a result of the Gulf War, and the general run of the business cycle since 1983. A surge in inflation in 1988 and 1989 forced the Federal Reserve to raise the discount rate to 8.00% in early 1990, restricting credit to the already-weakening economy. GDP growth and job creation remained weak through late 1992. Unemployment rose from 5.4% in January 1990 to 6.8% in March 1991, and continued to rise until peaking at 7.8% in June 1992. Approximately 1.621 million jobs were shed during the recession. As inflation subsided drastically, the Federal Reserve cut interest rates to a then-record low of 3.00% to promote growth. For the first time since the Great Depression, the economy underwent a "jobless recovery," in which GDP growth and corporate earnings returned to normal levels while job creation lagged, demonstrating the importance of the financial and service sectors in the national economy, which had surpassed the manufacturing sector in the 1980s. Politically, the stagnant economy would doom President George H. W. Bush in the 1992 election, as Bill Clinton capitalized on economic frustration and voter fatigue after 12 years of Republican stewardship of the White House. Unemployment remained above 7% until July 1993, and above 6% until September 1994. It was in the spring of 1994 that GDP growth surged, and the number of jobs created that year (3.85 million) set a record that has yet to be surpassed as of 2015. But 1995 would bring a pause in economic growth, primarily because the Federal Reserve raised interest rates from 3% to 6% beginning in late 1994 to prevent inflation from rising after such rapid growth, along with two government shutdowns that slowed the economy. The pause was short-lived, however, as the economy adjusted and the surge of investment in the dot-com bubble would jumpstart the economy beginning in late 1995. 1996 saw a return to steady growth, and in May 1997 unemployment fell below 5% for the first time since December 1973. This prosperity, combined with the Omnibus Budget Reconciliation Act of 1990 and the Omnibus Budget Reconciliation Act of 1993 (which raised taxes), and the Balanced Budget Act of 1997 (which cut spending), allowed the federal government to go from a $290 billion deficit in 1992 to a record $236.4 billion surplus in 2000. The reduction in government borrowing freed up capital in markets for businesses and consumers, causing interest rates on loans to fall and creating a cycle that reinforced growth. Government debt increased from $5.02 trillion in 1990 to $5.413 trillion in 1997 and then flatlined, barely increasing to $5.674 trillion in 2000. 
1995–2000 is also remembered for a series of global financial crises that threatened the U.S. economy: Mexico in 1995, Asia in 1997, Russia in 1998, and Argentina in 1999. Despite occasional stock market downturns and some distortions in the trade deficit, the US economy remained resilient until the dot-com bubble peaked in March 2000, followed by a recession a year later. The Federal Reserve had a hand in propping up the US economy by lowering interest rates to 4.75% by November 1998 to flood the world financial markets with dollars and prevent a global economic crisis, as well as to restore confidence within the American economy, which had panicked during the height of the Asian financial crisis in 1997. The easing of credit also coincided with spectacular stock market run-ups from 1999 to 2000. The NASDAQ, at less than 800 points in 1994, surged to over 5,000 in March 2000. The Dow Jones Industrial Average traded at roughly 3,000 points in 1990 and 4,000 in 1995, then nearly tripled to over 11,000 by mid-2000. Proposed reasons for the boom Several explanations have been proposed for the economic boom; none of these rationales should be seen as mutually exclusive. End of the boom Despite these concerns, it was during this time that talk of a "New Economy" emerged, in which inflation and unemployment were low and strong growth coincided. Some even spoke of the end of the business cycle, with economic growth seen as perpetual. In April 2000, unemployment dropped to 3.8%, and it remained below 4% from September to December 2000. For the whole 1990–2000 period, roughly 23,672,000 jobs were created. Hourly wages had increased by a strong 10.1% since 1996. But by the fall of 2000, the economy began to run out of steam. The Federal Reserve hiked rates to 6.5% in May 2000, and it appeared by late 2000 that the business cycle had not been eliminated but was coming to a crest. Growth faltered, job creation slowed, the stock markets plunged, and the groundwork for the 2001 recession was being laid, ending the economic boom of the 1990s. Legacy According to the National Bureau of Economic Research, the 1990s expansion was the longest economic expansion in the history of the United States until the 2009–2020 expansion, lasting exactly ten years from March 1991 to March 2001. It was the best performance on all counts since the 1961–1969 period. The importance and influence of the financial sector only grew, as demonstrated by the bursting of the dot-com bubble in 2000, followed by a recession in 2001.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-24] | [TOKENS: 5247] |
Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. The study of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). 
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, are often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods both to emerging data about online social networks and to "digital traces" of face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset-level research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset-level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. 
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects, as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. 
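The scale-free pattern described above is easy to see in simulation. The sketch below grows a network with the Barabási–Albert preferential-attachment model and inspects its degree distribution; the use of Python and the networkx library is an illustrative assumption, not tooling prescribed by the article.

```python
# Minimal sketch: generate a Barabasi-Albert graph and look for the
# hallmarks of a scale-free network (a heavy-tailed degree distribution
# dominated by a few high-degree "hubs"). networkx is an assumed dependency.
import networkx as nx

# Each new node attaches to m=3 existing nodes with probability
# proportional to their current degree (preferential attachment).
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

degrees = [d for _, d in G.degree()]
average = sum(degrees) / len(degrees)
hubs = sorted(degrees, reverse=True)[:5]

# In a scale-free network, a handful of vertices greatly exceed the
# average degree, which itself stays modest.
print(f"average degree: {average:.1f}")
print(f"five largest degrees (hubs): {hubs}")
```

On a typical run the average degree stays near 6 while the largest hubs reach degrees in the hundreds, the disparity that a power-law degree distribution implies.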
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. 
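The structural-hole measures discussed above lend themselves to direct computation. The sketch below builds a deliberately simple graph, two dense clusters joined only through a single broker, and computes Burt's constraint and effective size with networkx; the graph and the tooling are illustrative assumptions, not data taken from the article.

```python
# Minimal sketch of Burt's structural-hole measures on an invented toy graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),  # dense cluster A
    ("c1", "c2"), ("c2", "c3"), ("c1", "c3"),  # dense cluster C
    ("b", "a1"), ("b", "c1"),                  # broker "b" bridges the hole
])

constraint = nx.constraint(G)       # low constraint = few redundant contacts
effective = nx.effective_size(G)    # high effective size = non-redundant reach

for node in sorted(G, key=constraint.get):
    print(f"{node}: constraint={constraint[node]:.2f}, "
          f"effective size={effective[node]:.2f}")
```

The broker b comes out with the lowest constraint and the largest effective size relative to its degree: its two contacts are not connected to one another, so b sits on the structural hole and enjoys the information benefits described above.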
Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. 
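Diffusion-of-innovations questions of the kind described above are often explored with simple spreading simulations. The toy sketch below seeds a single "early adopter" on a small-world network and lets the innovation pass along ties with a fixed probability; the network model, the 20% transmission rate, and the use of Python/networkx are all illustrative assumptions rather than parameters from any study the article cites.

```python
# Toy diffusion simulation: an innovation spreads from a single early
# adopter over a small-world network. All parameters are illustrative.
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

adopted = {0}            # the lone early adopter
transmission = 0.2       # chance a tie passes the innovation on, per step

for step in range(1, 16):
    newly = set()
    for node in adopted:
        for neighbor in G.neighbors(node):
            if neighbor not in adopted and random.random() < transmission:
                newly.add(neighbor)
    adopted |= newly
    print(f"step {step:2d}: {len(adopted):3d} adopters")
```

Runs of this kind reproduce the familiar S-curve of adoption: slow early uptake, rapid spread once the innovation reaches well-connected regions of the network, then saturation. They also make it easy to test how rewiring probability or seed placement facilitates or impedes the spread.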
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Another cluster studies formal or informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations, and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. 
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Following the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
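Measuring homophily of the kind just described is routine in practice: attribute assortativity reports how strongly ties concentrate within groups. The sketch below uses networkx and its bundled Zachary karate-club graph, whose nodes carry a "club" label standing in for any group attribute of interest; both choices are assumptions made for illustration.

```python
# Minimal sketch: quantify homophily/segregation as attribute assortativity.
import networkx as nx

G = nx.karate_club_graph()  # ships with networkx; nodes carry a "club" label

# +1 means ties occur only within groups (complete segregation),
#  0 means mixing is random, and negative values indicate heterophily.
r = nx.attribute_assortativity_coefficient(G, "club")
print(f"assortativity by club membership: {r:.2f}")
```

The karate-club graph scores around 0.7, reflecting the strongly within-faction ties that eventually split the club; this is exactly the kind of within-group concentration, and lack of exposure between groups, that the segregation measures above aim to capture.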
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mindflex] | [TOKENS: 227] |
Mindflex Mindflex is a toy by Mattel by which, according to its description, the operator uses their brain waves to steer a ball through an obstacle course. Brain waves are registered by the enclosed EEG headset, which allows the user to control an air stream by concentrating, thus lifting or lowering a foam ball that is trapped in the airflow due to the Coandă effect. The game was released in the fall of 2009, and uses the same microchip as the MindSet from NeuroSky and homebuilt EEG machines. Controversy Despite the science behind the technology developed by Mattel, outside scientists have questioned whether the toy actually measures brain waves or merely moves the ball randomly, exploiting the well-known illusion of control. However, despite the experiments of John-Dylan Haynes, supporters of the game stand behind the research that went into the development of Mindflex, and believe that the headset does indeed read EEGs.
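The description above implies a simple control loop: the headset produces a concentration estimate, and that estimate drives the fan that lifts the ball. The sketch below is purely hypothetical Python; the 0-100 attention scale and every function name are invented for illustration and are not Mattel's or NeuroSky's actual code or API.

```python
# Hypothetical sketch of Mindflex-style control: an EEG "attention" reading
# sets fan speed, which raises or lowers the ball. Nothing here is real
# Mattel/NeuroSky code; names and scales are invented for illustration.
def fan_duty_cycle(attention: int) -> float:
    """Map a hypothetical 0-100 attention estimate to a 0.0-1.0 fan duty."""
    attention = max(0, min(100, attention))  # clamp noisy readings
    return attention / 100.0

def ball_response(duty: float) -> str:
    # More airflow lifts the foam ball; the Coanda effect keeps it centred
    # in the air stream rather than blowing it aside.
    if duty > 0.66:
        return "ball rises"
    if duty > 0.33:
        return "ball hovers"
    return "ball sinks"

for reading in (20, 55, 90):  # invented headset readings
    print(reading, "->", ball_response(fan_duty_cycle(reading)))
```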
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Giphy] | [TOKENS: 1146] |
Giphy Giphy (/ˈɡɪfi/, GHIF-ee), styled as GIPHY, is an American online database and search engine that allows users to search for and share animated GIF files. History Giphy was founded by Alex Chung and Jace Cooke in February 2013. The idea for the business came when the pair was having breakfast, musing on the rising trend of purely visual communication. When Chung and Cooke first launched Giphy, the website functioned solely as a search engine for GIFs. According to Chung, Giphy attracted around a million users during its first week, and the figure leveled out to 300,000. Giphy features what its founders called "conversational search", wherein content is brought into users' conversations through a search box found in their messaging applications. In August 2013, Giphy expanded beyond a search engine to allow users to post, embed and share GIFs on Facebook. Giphy was then recognized as a Top 100 Website of 2013 by PC Magazine. Three months later, Giphy integrated with Twitter to enable users to share GIFs by simply sharing a GIF's URL. In May 2014, Giphy raised $2.4 million in a Series A funding round from investors including Quire, CAA Ventures, RRE Ventures, Lerer Hippeau Ventures, and Betaworks. In March 2015, Giphy acquired Nutmeg, a GIF messaging service, as one of the company's first major steps into the mobile industry. This coincided with the launch of Facebook Messenger's own development platform, in which Giphy joined a few exclusive apps in its debut. In August 2015, Giphy launched its second mobile app, GIPHY Cam, which allows users to create and share GIFs on a social network. In February 2016, Giphy raised $55 million in funding at a $300 million valuation. In October 2016, Giphy announced several statistics, including the statement that it had 100 million daily active users, that it served over 1 billion GIFs per day, and that visitors watched more than 2 million hours of GIF content every day. In July 2017, Giphy announced that it had 200 million daily active users across both the API and the website, with around 250 million monthly active users on the website. Chung announced at a February 2019 New York event that Giphy was exploring an advertising scheme distinct from the Google model, which shows ads according to users' search histories. The idea is to embed advertising in private messages. Giphy is seeking to take advantage of this landscape since its GIF database has been integrated into most messaging services. In May 2020, it was announced that Giphy had agreed to be acquired by Facebook Inc. (now Meta Platforms), with a reported purchase price of $400 million. Facebook services had accounted for roughly half of Giphy's overall traffic. Giphy was to be integrated with the staff of Facebook subsidiary Instagram, although Facebook stated that there would be no immediate changes to the service. Facebook discontinued Giphy's display advertising program upon the purchase. The acquisition faced scrutiny due to recent privacy scandals surrounding Facebook. The United Kingdom's Competition and Markets Authority (CMA) argued that the deal was potentially anti-competitive and began a probe. In June 2020, the CMA issued an enforcement order prohibiting Giphy from being fully integrated into Facebook, pending a future ruling. In August 2021, the CMA issued preliminary findings, arguing that there was a risk that Facebook could pull Giphy's services from competitors, or require them to provide more user data as a condition of service. 
It also raised concerns over the market share of Facebook's advertising services. On November 30, 2021, the CMA ruled that Meta would be required to divest Giphy. On July 18, 2022, the Competition Appeal Tribunal ordered the CMA to re-evaluate its decision on procedural grounds, as it "failed to properly consult" and "wrongly excised portions" of its decision. It otherwise upheld most of the CMA's original decision. On October 18, 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Shutterstock announced it would buy Giphy from Meta Platforms for $53 million in cash. The acquisition was completed on June 23, 2023. Partnerships Giphy partners with brands to host GIFs that can be shared as marketing promotions via social media channels. The company also created artist profiles on the website, which allow GIFs to be attributed to the artist(s) who created them. In September 2014, Giphy partnered with Line to host the inaugural sticker design competition. LINE and GIPHY enlisted a team of digital partners, including Tumblr, Fox ADHD, Frederator, Cut & Paste, New Museum, Eyebeam, Rhizome, The Webby Awards, Pratt, The Huffington Post and Dribbble, to support the event. In August 2015, Universal Studios partnered with Giphy to release six GIFs promoting the new N.W.A-based movie, Straight Outta Compton. Giphy has partnered with over 200 companies and brands to host all their existing content on their own branded channels. Giphy's partners include Disney, Calvin Klein, GE, and Pepsi.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-Munro214-179] | [TOKENS: 6011] |
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics and sports. 
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals within each biome form complex food webs in that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction in which the predator feeds on another organism, its prey, which in turn often evolves anti-predator adaptations to avoid being fed upon. The selective pressures each imposes on the other lead to an evolutionary arms race between predator and prey, resulting in antagonistic and competitive coevolution. Almost all multicellular predators are animals. Some consumers use multiple feeding methods; for example, in parasitoid wasps the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges.

Most animals rely on the biomass and energy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat plant material directly to digest and absorb the nutrients it contains, while carnivores and other animals on higher trophic levels acquire nutrients indirectly, by eating the herbivores or other animals that have themselves eaten herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows them to grow, to sustain basal metabolism, and to fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidation of inorganic compounds such as hydrogen sulfide) by archaea and bacteria.

Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, include at least some marine species. However, several lineages of arthropods began to colonise land at around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land include the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most work limited to individual species and well-known exemplars.

Diversity

The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres in length.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species, Myxobolus shekel, is no more than 8.5 μm when fully grown. Described extant species are unevenly distributed among the major animal phyla, which differ in their principal habitats (terrestrial, fresh water, and marine) and in free-living or parasitic ways of life. Such species counts are based on numbers described scientifically; much larger estimates have been calculated by various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species range from 10,000–20,000 through 500,000 to 10 million and even 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species, including those not yet described, was calculated to be about 7.77 million in 2011.[a]

Evolutionary origin

Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is produced only by sponges and pelagophyte algae. Its likely origin is sponges, based on molecular clock estimates for the origin of 24-ipc production in the two groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record.

The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia established their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments.

Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record rather than evidence that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 Mya, some 20 million years before the Cambrian explosion), from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do.

Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from about 1 Gya) may indicate the presence of triploblastic worm-like animals roughly as large (about 5 mm wide) and as complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 Gya rocks in North America, in 1.5 Gya rocks in Australia and North America, and in 1.7 Gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might instead be water-escape structures or other sedimentary features.

Phylogeny

Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors; in their external phylogeny, the successively more distant relatives of the Choanozoa are the Filasterea, the Pluriformea, the Ichthyosporea and the Holomycota (including the fungi), though some of these relationships remain uncertain. The animal clade had certainly originated by 650 Mya, and may have come into being as much as 800 Mya, based on molecular clock evidence for different phyla.

The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, the Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals; however, the presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be the sister group to the rest of the animals, but there is evidence that the Ctenophora may instead occupy that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, favouring a sponge-sister tree in which Porifera branches first, followed by Ctenophora, then Placozoa, and finally Cnidaria plus Bilateria (their ctenophore-sister tree simply interchanges the places of the ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to support a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, then Placozoa, and finally Cnidaria plus Bilateria.

Sponges are physically very distinct from other animals and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity to all other animals, genetic evidence suggests sponges may be more closely related to the other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike in all other animals. They typically feed by drawing in water through pores and filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
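The sponge-sister and ctenophore-sister hypotheses described above differ only in which lineage branches first. Writing each tree as a Newick string (a standard plain-text format for phylogenies) makes this explicit. The following minimal Python sketch is illustrative; the helper function assumes the ladder-like shape of these particular trees rather than being a general Newick parser.

```python
# The two competing root-of-the-animal-tree hypotheses, written as
# Newick strings (branch lengths omitted). The topologies follow the
# cladograms described in the text.

sponge_sister     = "(Porifera,(Ctenophora,(Placozoa,(Cnidaria,Bilateria))));"
ctenophore_sister = "(Ctenophora,(Porifera,(Placozoa,(Cnidaria,Bilateria))));"

def first_diverging(newick: str) -> str:
    """Return the earliest-branching taxon, assuming a fully nested,
    ladder-shaped Newick string like the two above."""
    # In a ladder-shaped tree, the first label after the opening
    # parentheses is the sister group to everything else.
    return newick.lstrip("(").split(",", 1)[0]

print(first_diverging(sponge_sister))      # -> Porifera
print(first_diverging(ctenophore_sister))  # -> Ctenophora
```

Everything after the first split (Placozoa, then Cnidaria plus Bilateria) is identical in the two trees, which is why the debate turns entirely on the relative placement of the sponges and the comb jellies.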
The remaining animals, the great majority, comprising some 29 phyla and over a million species, form the clade Bilateria, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny, the Xenacoelomorpha branch first, and the remaining bilaterians divide into the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia).

Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved that have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures.

Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians.

Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth and the anus forms secondarily, whereas in deuterostomes the anus forms first and the mouth develops secondarily. Most protostomes have schizocoelous development, in which cells simply fill in the interior of the gastrula to form the mesoderm; in deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. The Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which comprise the fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named for their shared trait of ecdysis, growth by moulting; among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes form the Spiralia, named for their pattern of development by spiral cleavage in the early embryo; major spiralian phyla include the annelids and molluscs.
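The divergence dates quoted earlier in this section (an animal origin by 650 Mya, possibly as early as 800 Mya) rest on molecular clock reasoning. A minimal worked form of the idea is sketched below; the numbers plugged in are purely illustrative, not the values used in the studies cited.

```latex
% Strict molecular clock, simplest form.
% K : observed substitutions per site between two lineages
% r : assumed constant substitution rate (per site per year)
% Divergence accumulates along both branches since the split at time t:
K \approx 2rt \quad\Longrightarrow\quad t \approx \frac{K}{2r}
% Illustrative numbers only: K = 1.3 substitutions per site and
% r = 10^{-9}\,\text{yr}^{-1} give
t \approx \frac{1.3}{2 \times 10^{-9}\,\text{yr}^{-1}}
  = 6.5 \times 10^{8}\ \text{yr} \approx 650\ \text{Mya}
```

Real analyses relax the constant-rate assumption, calibrate the rate against dated fossils, and propagate uncertainty, which is why such estimates span a range (here 650 to 800 Mya) rather than a single date.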
History of classification

In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul), down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about.

In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians.

In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches', with different body plans, roughly corresponding to phyla): vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, the sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia.

In human culture

Humans exploit a large number of other animal species for food, both from domesticated livestock in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food, and a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects (principally bees and silkworms) and bivalve or gastropod molluscs, are hunted or farmed for food and for fibres such as silk. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture.

Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccination was discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts.

A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are also hunted for sport.

The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.