[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-26] | [TOKENS: 11349]
Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation provides a framework for estimating the prevalence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. Compared with the life-abundant Earth, the vast majority of planets and moons, both within and beyond the Solar System, have harsh surface conditions and very different atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be where life on Earth originated. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radio telescopes used to detect and transmit interstellar communications. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given humanity's history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, but the elements needed for organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear for another 50 million years, created through stellar fusion. By then, the obstacle to the appearance of life was no longer the temperature but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth. 
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia. During most of their stellar evolution, stars fuse hydrogen nuclei into helium nuclei; because the helium produced is slightly lighter than the hydrogen that formed it, the star releases the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. The larger stars can go further, fusing carbon and then oxygen into progressively heavier elements such as neon, silicon, and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. Because this process takes place throughout the universe, such materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of a galaxy, the Milky Way. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause long travel times: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in about 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to account for more combined mass than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even a guarantee that it actually has liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to their crushing atmospheric pressures. 
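How the boundaries of the Goldilocks zone shift with the type of star, discussed next, can be illustrated with a common first-order approximation that the article itself does not spell out: the stellar flux a planet receives scales as L/d^2, so the distance at which Earth-like temperatures occur scales roughly with the square root of the star's luminosity. The sketch below assumes this scaling, and the 0.95 and 1.37 AU boundary values for the Sun are illustrative placeholders only; published estimates vary.

import math

# Rough habitable-zone scaling (an assumption, not from the article): equilibrium
# temperature depends on stellar flux L / d^2, so the "Earth-temperature" distance
# scales as sqrt(L / L_sun). The Sun's boundaries used here are placeholders.
def habitable_zone_au(luminosity_in_solar_units, inner_sun_au=0.95, outer_sun_au=1.37):
    scale = math.sqrt(luminosity_in_solar_units)
    return inner_sun_au * scale, outer_sun_au * scale

print(habitable_zone_au(1.0))     # roughly the Sun's zone, near 1 AU
print(habitable_zone_au(0.0017))  # a dim red dwarf: the zone sits much closer in
print(habitable_zone_au(25.0))    # a brighter star: the zone moves several AU out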
The actual distances for the habitable zones vary according to the type of star, and even the activity of each specific star influences its local habitability. The type of star also determines how long the habitable zone will exist, as its position and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is ubiquitous across the planet and has adapted over time to almost all the available environments; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements: a celestial body may not have any life on it even if it is habitable. Likelihood of existence Life in the cosmos beyond Earth has never been observed. The hypothesis of ubiquitous extraterrestrial life nevertheless relies on three main ideas. The first is that the size of the universe allows for plenty of planets with habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data. 
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is N = R* · fp · ne · fl · fi · fc · L, where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the average rate of star formation, fp is the fraction of stars with planets, ne is the average number of planets that could support life per star that has planets, fl is the fraction of those planets on which life actually appears, fi is the fraction of life-bearing planets that develop intelligent life, fc is the fraction of civilizations that release detectable signs of their existence into space, and L is the length of time over which such civilizations release detectable signals. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution: 10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000. The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation for the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. 
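Because the article quotes Drake's illustrative values, the arithmetic can be restated as a short Python sketch. The parameters below map the quoted factors to the equation in the order they appear in the product; they are speculative placeholders, not measurements.

# Minimal sketch of the Drake equation using the illustrative values quoted above.
# All parameter values are speculative placeholders, as the text notes.
def drake_equation(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable, communicative civilizations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake_equation(
    R_star=5,    # average rate of star formation (stars per year)
    f_p=0.5,     # fraction of stars with planets
    n_e=2,       # planets that could support life per planet-bearing star
    f_l=1,       # fraction of those planets where life appears
    f_i=0.2,     # fraction of life-bearing planets that develop intelligence
    f_c=1,       # fraction of civilizations releasing detectable signals
    L=10_000,    # years such signals remain detectable
)
print(N)  # 10000.0 with these inputs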
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976 considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. 
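The bond-counting argument made earlier in this section can be restated as a small filter. The element list and bond counts come straight from the text, and the abundance set simply encodes the article's claim that carbon, nitrogen and silicon are by far the most cosmically abundant of the nine candidates; this is an illustration of the reasoning, not a chemical model.

# Restating the article's bond-counting argument: a backbone element needs at
# least three covalent bonds (two to extend a chain, one for information-carrying
# side groups). Bond counts and the abundance claim are taken from the text.
candidates = {
    "boron": 3, "nitrogen": 3, "phosphorus": 3, "arsenic": 3, "antimony": 3,
    "carbon": 4, "silicon": 4, "germanium": 4, "tin": 4,
}
abundant_in_cosmos = {"carbon", "nitrogen", "silicon"}

viable = [element for element, bonds in candidates.items() if bonds >= 3]
best_bets = [element for element in viable if element in abundant_in_cosmos]
print(best_bets)  # ['nitrogen', 'carbon', 'silicon']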
Extraterrestrial life may still be stuck using RNA, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research in assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. It is common knowledge that the conditions on other planets in the solar system, in addition to the many galaxies outside of the Milky Way galaxy, are very harsh and seem to be too extreme to harbor any life. The environmental conditions on these planets can have intense UV radiation paired with extreme temperatures, lack of water, and much more that can lead to conditions that don't seem to favor the creation or maintenance of extraterrestrial life. However, there has been much historical evidence that some of the earliest and most basic forms of life on Earth originated in some extreme environments that seem unlikely to have harbored life at least at one point in Earth's history. Fossil evidence as well as many historical theories backed up by years of research and studies have marked environments like hydrothermal vents or acidic hot springs as some of the first places that life could have originated on Earth. 
These environments can be considered extreme when compared to the typical ecosystems that the majority of life on Earth now inhabit, as hydrothermal vents are scorching hot due to the magma escaping from the Earth's mantle and meeting the much colder oceanic water. Even in today's world, there can be a diverse population of bacteria found inhabiting the area surrounding these hydrothermal vents which can suggest that some form of life can be supported even in the harshest of environments like the other planets in the solar system. The aspects of these harsh environments that make them ideal for the origin of life on Earth, as well as the possibility of creation of life on other planets, is the chemical reactions forming spontaneously. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes which allow organisms to utilize energy through reduced chemical compounds that fix carbon. In return, these reactions will allow for organisms to live in relatively low oxygenated environments while maintaining enough energy to support themselves. The early Earth environment was reducing and therefore, these carbon fixing compounds were necessary for the survival and possible origin of life on Earth. With the little amount of information that scientists have found regarding the atmosphere on other planets in the Milky Way galaxy and beyond, the atmospheres are most likely reducing or with very low oxygen levels, especially when compared with Earth's atmosphere. If there were the necessary elements and ions on these planets, the same carbon fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on these planets' surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. There is a greenhouse effect, the surface is the hottest in the Solar System, sulfuric acid clouds, all surface liquid water is lost, and it has a thick carbon-dioxide atmosphere with huge pressure. Comparing both helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial life-forms may still survive in high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. 
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is hope of finding it on moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which facilitates chemical reactions. It may be difficult to drill deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it is at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continued existence. This helps to determine what to look for when searching for life on other celestial bodies. It is a complex area of study that uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies on the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) had been reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. 
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, with the way each one reacts to sunlight. The goal is to help with the search for similar organisms in exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth was studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis. In August 2011, NASA studied meteorites found on Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first time discovery, in the plumes of Enceladus, moon of the planet Saturn, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. 
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures, effects on the native planet that may not be caused by natural causes. There are three main types of techno-signatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the required level of detail to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess infrared radiation, that telescopes may notice. The infrared radiation is typical of young stars, surrounded by dusty protoplanetary disks that will eventually form planets. An older star such as the Sun would have no natural reason to have excess infrared radiation. The presence of heavy elements in a star's light-spectrum is another potential biosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems including 1,017 multiple planetary systems as of 30 October 2025). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. 
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, and they rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds. In Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. 
Eventually two groups emerged, the atomists that thought that matter at both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians who thought that those elements were exclusive of Earth and that the cosmos was made of a fifth one, the aether. Atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, and that would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only in the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talk about other worlds, they talk about places located at the center of their own systems, and with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about it, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge expanded through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. 
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which worked now almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, clarified the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but just a planet orbiting around a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic church. Galileo was tried for the heliocentric model, which was considered heretical, and forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitely discarded. By this time, the use of the scientific method had become a standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons for working that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When it was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist in them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. 
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the solar system still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked forever the idea of the existence of Martians and decreased the previous expectations of finding alien life in general. The end of the spontaneous generation belief forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named at the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later observations (with more powerful telescopes) revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site. The search and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not so much for the origin of life on Earth itself as for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increased basic knowledge of science by the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. 
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people at the time failed to understand what they were seeing. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but the interest in extraterrestrial life increased regardless. This is a result of the advances in several sciences. The knowledge of planetary habitability makes it possible to consider, in scientific terms, the likelihood of finding life on each specific celestial body, as it is known which features are beneficial or harmful for life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may still be a rarity found only on Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". 
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms in the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of the existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied. 
In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not thought of as being any different from humans. Having no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. Now with the notion that evolution on other planets may take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do that was to add body features from other animals, such as insects or octopuses. Costuming and special effects feasibility alongside budget considerations forced films and TV series to tone down the fantasy, but these limitations lessened since the 1990s with the advent of computer-generated imagery (CGI), and later on as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination and this influences the works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype once used in works of fiction. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_Israel#Babylonian_diaspora_after_587/86_BCE] | [TOKENS: 14912]
Contents History of Israel The history of Israel covers the Southern Levant region also known as Canaan, Palestine, or the Holy Land, which is the location of Israel and Palestine. From prehistory, as part of the Levantine corridor, the area witnessed waves of early humans from Africa, then the emergence of Natufian culture c. 10,000 BCE. The region entered the Bronze Age c. 2,000 BCE with the development of Canaanite civilization. In the Iron Age, the kingdoms of Israel and Judah were established, entities central to the origins of the Abrahamic religions. This has given rise to Judaism, Samaritanism, Christianity, Islam, Druzism, Baha'ism. The Land of Israel has seen many conflicts, been controlled by various polities, and hosted various ethnic groups. In the following centuries, the Assyrian, Babylonian, Achaemenid, and Macedonian empires conquered the region. Ptolemies and Seleucids vied for control during the Hellenistic period. Through the Hasmonean dynasty, the Jews maintained independence for a century before incorporation into the Roman Republic. As a result of the Jewish–Roman wars in the 1st and 2nd centuries CE, many Jews were killed, or sold into slavery. Following the advent of Christianity, demographics shifted towards newfound Christians, who replaced Jews as the majority by the 4th century. In the 7th century, Byzantine Christian rule over Israel was superseded in the Muslim conquest of the Levant by the Rashidun Caliphate, to later be ruled by the Umayyad, Abbasid, and Fatimid caliphates, before being conquered by the Seljuks in the 1070s. Throughout the 12th and 13th centuries, the Land of Israel saw wars between Christians and Muslims as part of the Crusades, with the Kingdom of Jerusalem overrun by Saladin's Ayyubids in the 12th century. The Crusaders hung on to decreasing territories for another century. In the 13th century, the Land of Israel became subject to Mongol conquest, though this was stopped by the Mamluk Sultanate, under whose rule it remained until the 16th century. The Mamluks were defeated by the Ottoman Empire, and the region became an Ottoman province until the early 20th century. The 19th century saw the rise of a Jewish nationalist movement in Europe known as Zionism; aliyah, Jewish immigration to Israel from the diaspora, increased. During World War I, the Sinai and Palestine campaign of the Allies led to the partition of the Ottoman Empire. Britain was granted control of the region by a League of Nations mandate, known as Mandatory Palestine. The British committed to the creation of a Jewish homeland in the 1917 Balfour Declaration. Palestinian Arabs sought to prevent Jewish immigration, and tensions grew during British administration. In 1947, the UN voted for the partition of Mandate Palestine and creation of a Jewish and Arab state. The Jews accepted the plan, while the Arabs rejected it. A civil war ensued, won by the Jews. In May 1948, the Israeli Declaration of Independence sparked the 1948 War in which Israel repelled the armies of the neighbouring states. It resulted in the 1948 Palestinian expulsion and flight and led to Jewish emigration from other parts of the Middle East. About 40% of the global Jewish population resides in Israel. In 1979, the Egypt–Israel peace treaty was signed. In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian Authority. In 1994, the Israel–Jordan peace treaty was signed. 
Despite a long-running Israeli–Palestinian peace process, the conflict continues. Prehistory The oldest evidence of early humans in the territory of modern Israel, dating to 1.5 million years ago, was found in Ubeidiya near the Sea of Galilee. Flint tool artefacts have been discovered at Yiron, the oldest stone tools found anywhere outside Africa. The Daughters of Jacob Bridge over the Jordan River provides evidence of the control of fire by early humans around 780,000 years ago, one of the oldest known examples. In the Mount Carmel area, at el-Tabun and Es Skhul, Neanderthal and early modern human remains were found; the area shows the longest stratigraphic record in the region, spanning some 600,000 years of human activity, from the Lower Paleolithic to the present day. Other significant Paleolithic sites include Qesem cave. A 200,000-year-old fossil from Misliya Cave is the second-oldest evidence of anatomically modern humans found outside Africa. Other notable finds include the Skhul and Qafzeh hominins, as well as Manot 1. Around the 10th millennium BCE, the Natufian culture existed in the area. The beginning of agriculture in the region during the Neolithic Revolution is evidenced by sites such as Nahal Oren and Gesher. Bronze Age Canaan The Canaanites are archaeologically attested in the Middle Bronze Age (2100–1550 BCE), organized in what were probably independent or semi-independent city-states. Cities were often surrounded by massive earthworks, resulting in the archaeological mounds, or 'tells', common in the region today. In the late Middle Bronze Age, the Nile Delta in Egypt was settled by Canaanites who maintained close connections with Canaan. During that period, the Hyksos, dynasties of Canaanite/Asiatic origin, ruled much of Lower Egypt before being overthrown in the 16th century BCE. During the Late Bronze Age (1550–1200 BCE), there were Canaanite vassal states paying tribute to the New Kingdom of Egypt, which governed from Gaza. In 1457 BCE, Egyptian forces under the command of Pharaoh Thutmose III defeated a rebellious coalition of Canaanite vassal states led by Kadesh's king at the Battle of Megiddo. In the Late Bronze Age there was a period of civilizational collapse in the Middle East; Canaan fell into chaos, and Egyptian control ended. There is evidence that urban centers such as Hazor, Beit She'an, Megiddo, Ekron, Isdud and Ascalon were damaged or destroyed. Two groups appear at this time and are associated with the transition to the Iron Age (their iron tools and weapons were superior to the earlier bronze ones): the Sea Peoples, particularly the Philistines, who migrated from the Aegean world and settled on the southern coast, and the Israelites, whose settlements dotted the highlands. Some 2nd millennium inscriptions about the semi-nomadic Habiru people are believed to be connected to the Hebrews, a name generally used synonymously with the Biblical Israelites. Many scholars regard this connection as plausible since the two ethnonyms have similar etymologies, although others argue that Habiru refers to a social class found in every Near Eastern society, including Hebrew societies. Ancient Israel and Judah: Iron Age to Babylonian period The earliest recorded evidence of a people by the name of Israel (as ysrỉꜣr) occurs in the Egyptian Merneptah Stele, erected for Pharaoh Merneptah c. 1209 BCE.
Archaeological evidence indicates that during the early Iron Age I, hundreds of small villages were established on the highlands of Canaan on both sides of the Jordan River, primarily in Samaria, north of Jerusalem. These villages had populations of up to 400, were largely self-sufficient, and lived from herding, grain cultivation, and growing vines and olives, with some economic interchange. The pottery was plain and undecorated. Writing was known and available for recording, even in small sites. William G. Dever sees this "Israel" in the central highlands as a cultural and probably political entity, more an ethnic group than an organized state. Modern scholars believe that the Israelites and their culture branched out of the Canaanite peoples and their cultures through the development of a distinct monolatristic—and later monotheistic—religion centred on a national god Yahweh. According to McNutt, "It is probably safe to assume that sometime during Iron Age I a population began to identify itself as 'Israelite'", differentiating itself from the Canaanites through such markers as the prohibition of intermarriage, an emphasis on family history and genealogy, and religion. The Philistines, by contrast, left a distinctly foreign material culture: their cooking tools, the prevalence of pork in their diets, and locally made Mycenaean pottery—which later evolved into bichrome Philistine pottery—all support their foreign origin. Their cities were large and elaborate, which—together with these findings—points to a complex, hierarchical society. Israel Finkelstein believes that the oldest Abraham traditions, which focus on the themes of land and offspring and possibly on his altars in Hebron, originated in the Iron Age. Abraham's Mesopotamian heritage is not discussed. In the 10th century BCE, the Israelite kingdoms of Judah and Israel emerged. The Hebrew Bible states that these were preceded by a single kingdom ruled by Saul, David and Solomon, who is said to have built the First Temple. Archaeologists have debated whether the united monarchy ever existed, with those who accept that such a polity existed further divided between maximalists who support the Biblical accounts and minimalists who argue that any such polity was likely smaller than suggested. Historians and archaeologists agree that the northern Kingdom of Israel existed by ca. 900 BCE and the Kingdom of Judah existed by ca. 850 BCE. The Kingdom of Israel was the more prosperous of the two kingdoms and soon developed into a regional power; during the days of the Omride dynasty, it controlled Samaria, Galilee, the upper Jordan Valley, the Sharon and large parts of the Transjordan. Samaria, the capital, was home to one of the largest Iron Age structures in the Levant. The Kingdom of Israel's capital moved between Shechem, Penuel and Tirzah before Omri established it at Samaria, and the royal succession was often settled by a military coup d'état. The Kingdom of Judah was smaller but more stable; the Davidic dynasty ruled the kingdom for the four centuries of its existence, with the capital always in Jerusalem, controlling the Judaean Mountains, most of the Shephelah and the Beersheba valley in the northern Negev. In 854 BCE, according to the Kurkh Monoliths, an alliance between Ahab of Israel and Ben Hadad II of Aram-Damascus managed to repulse the incursions of the Assyrians, with a victory at the Battle of Qarqar.
Another important discovery of the period is the Mesha Stele, a Moabite stele found in Dhiban when Emir Sattam Al-Fayez led Henry Tristram to it as they toured the lands of the vassals of the Bani Sakher. The stele is now in the Louvre. In the stele, Mesha, king of Moab, tells how Chemosh, the god of Moab, had been angry with his people and had allowed them to be subjugated to the Kingdom of Israel, but at length, Chemosh returned and assisted Mesha in throwing off the yoke of Israel and restoring the lands of Moab. It refers to Omri, king of Israel, to the god Yahweh, and may contain another early reference to the House of David. The Kingdom of Israel fell to the Assyrians following a long siege of the capital Samaria around 720 BCE. The records of Sargon II indicate that he captured Samaria and deported 27,290 inhabitants to Mesopotamia. It is likely that Shalmaneser captured the city, since both the Babylonian Chronicles and the Hebrew Bible viewed the fall of Israel as the signature event of his reign. The Assyrian deportations became the basis for the Jewish idea of the Ten Lost Tribes. Foreign groups were settled by the Assyrians in the territories of the fallen kingdom. The Samaritans claim to be descended from Israelites of ancient Samaria who were not expelled by the Assyrians. It is believed that refugees from the destruction of Israel moved to Judah, massively expanding Jerusalem and leading to construction of the Siloam Tunnel during the rule of King Hezekiah (ruled 715–686 BCE). The Siloam inscription, a plaque written in Hebrew left by the construction team, was discovered in the tunnel in the 1880s and is today held by the Istanbul Archaeology Museum. During Hezekiah's rule, Sennacherib, the son of Sargon, attempted but failed to capture Judah. Assyrian records say that Sennacherib levelled 46 walled cities and besieged Jerusalem, leaving after receiving extensive tribute. Sennacherib erected the Lachish reliefs in Nineveh to commemorate a second victory at Lachish. The writings of four different "prophets" are believed to date from this period: Hosea and Amos in Israel, and Micah and Isaiah in Judah. These men were mostly social critics who warned of the Assyrian threat and acted as religious spokesmen. They exercised some form of free speech and may have played a significant social and political role in Israel and Judah. They urged rulers and the general populace to adhere to god-conscious ethical ideals, seeing the Assyrian invasions as a divine punishment of the collective resulting from ethical failures. Under King Josiah (ruler from c. 640 to 609 BCE), the Book of Deuteronomy was either rediscovered or written. The Book of Joshua and the accounts of the kingship of David and Solomon in the Book of Kings are believed to have been written by the same author. These books, known as the Deuteronomistic history, are considered a key step in the emergence of monotheism in Judah. They emerged at a time when Assyria was weakened by the rise of Babylon, and may represent the committing to writing of earlier oral traditions. During the late 7th century BCE, Judah became a vassal state of the Neo-Babylonian Empire. In 601 BCE, Jehoiakim of Judah allied with Babylon's principal rival, Egypt, despite the strong remonstrances of the prophet Jeremiah. As a punishment, the Babylonians besieged Jerusalem in 597 BCE, and the city surrendered. The defeat was recorded by the Babylonians.
Nebuchadnezzar pillaged Jerusalem and deported king Jechoiachin (Jeconiah), along with other prominent citizens, to Babylon; Zedekiah, his uncle, was installed as king. A few years later, Zedekiah launched another revolt against Babylon, and an army was sent to conquer Jerusalem. In 587 or 586 BCE, King Nebuchadnezzar II of Babylon conquered Jerusalem, destroyed the First Temple and razed the city. The Kingdom of Judah was abolished, and many of its citizens were exiled to Babylon. The former territory of Judah became a Babylonian province called Yehud with its center in Mizpah, north of the destroyed Jerusalem. Tablets that describe King Jehoiachin's rations were found in the ruins of Babylon. He was eventually released by the Babylonians. According to both the Bible and the Talmud, the Davidic dynasty continued as head of Babylonian Jewry, called the "Rosh Galut" (exilarch or head of exile). Arab and Jewish sources show that the Rosh Galut continued to exist for another 1,500 years in what is now Iraq, ending in the eleventh century. Second Temple period In 538 BCE, Cyrus the Great of the Achaemenid Empire conquered Babylon and took over its empire. Cyrus issued a proclamation granting religious freedom to all peoples subjugated by the Babylonians (see the Cyrus Cylinder). According to the Bible, Jewish exiles in Babylon, including 50,000 Judeans led by Zerubabel, returned to Judah to rebuild the Temple in Jerusalem. The Second Temple was subsequently completed c. 515 BCE. A second group of 5,000, led by Ezra and Nehemiah, returned to Judah in 456 BCE. The first was empowered by the Persian king to enforce religious rules, the second had the status of governor and a royal mission to restore the walls of the city. The country remained a province of the Achaemenid empire called Yehud until 332 BCE. The final text of the Torah is thought to have been written during the Persian period (probably 450–350 BCE). The text was formed by editing and unifying earlier texts. The returning Israelites adopted an Aramaic script (also known as the Ashuri alphabet), which they brought back from Babylon; this is the current Hebrew script. The Hebrew calendar closely resembles the Babylonian calendar and probably dates from this period. The Bible describes tension between the returnees, the elite of the First Temple period, and those who had remained in Judah. It is possible that the returnees, supported by the Persian monarchy, became large landholders at the expense of the people who had remained to work the land in Judah, whose opposition to the Second Temple would have reflected a fear that exclusion from the cult would deprive them of land rights. Judah had become in practice a theocracy, ruled by hereditary High Priests and a Persian-appointed governor, frequently Jewish, charged with keeping order and seeing that tribute was paid. A Judean military garrison was placed by the Persians on Elephantine Island near Aswan in Egypt. In the early 20th century, 175 papyrus documents recording activity in this community were discovered, including the "Passover Papyrus", a letter instructing the garrison on how to correctly conduct the Passover feast. In 332 BCE, Alexander the Great of Macedon conquered the region as part of his campaign against the Achaemenid Empire. After his death in 322 BCE, his generals divided the empire and Judea became a frontier region between the Seleucid Empire and Ptolemaic Kingdom in Egypt. 
Following a century of Ptolemaic rule, Judea was conquered by the Seleucid Empire in 200 BCE at the battle of Panium. Hellenistic rulers generally respected Jewish culture and protected Jewish institutions. Judea was ruled by the hereditary office of the High Priest of Israel as a Hellenistic vassal. Nevertheless, the region underwent a process of Hellenization, which heightened tensions between Greeks, Hellenized Jews, and observant Jews. These tensions escalated into clashes involving a power struggle for the position of high priest and the character of the holy city of Jerusalem. When Antiochus IV Epiphanes desecrated the temple, forbade Jewish practices, and forcibly imposed Hellenistic norms on the Jews, several centuries of religious tolerance under Hellenistic control came to an end. In 167 BCE, the Maccabean revolt erupted after Mattathias, a Jewish priest of the Hasmonean lineage, killed a Hellenized Jew and a Seleucid official who participated in sacrifice to the Greek gods in Modi'in. His son Judas Maccabeus defeated the Seleucids in several battles, and in 164 BCE, he captured Jerusalem and restored temple worship, an event commemorated by the Jewish festival of Hanukkah. After Judas' death, his brothers Jonathan Apphus and Simon Thassi were able to establish and consolidate a vassal Hasmonean state in Judea, capitalizing on the Seleucid Empire's decline as a result of internal instability and wars with the Parthians, and by forging ties with the rising Roman Republic. The Hasmonean leader John Hyrcanus was able to gain independence, doubling Judea's territories. He took control of Idumaea, where he converted the Edomites to Judaism, and invaded Scythopolis and Samaria, where he demolished the Samaritan Temple. Hyrcanus was also the first Hasmonean leader to mint coins. Under his sons, kings Aristobulus I and Alexander Jannaeus, Hasmonean Judea became a kingdom, and its territories continued to expand, now also covering the coastal plain, Galilee and parts of the Transjordan. Some scholars argue that the Hasmonean dynasty also institutionalized the final Jewish biblical canon. Under Hasmonean rule, the Pharisees, Sadducees and the mystic Essenes emerged as the principal Jewish social movements. The Pharisee sage Simeon ben Shetach is credited with establishing the first schools based around meeting houses. This was a key step in the emergence of Rabbinical Judaism. After Jannaeus' widow, queen Salome Alexandra, died in 67 BCE, her sons Hyrcanus II and Aristobulus II engaged in a civil war over succession. The conflicting parties each requested Pompey's assistance, which paved the way for a Roman takeover of the kingdom. In 63 BCE, the Roman Republic conquered Judaea, ending Jewish independence under the Hasmoneans. The Roman general Pompey intervened in a dynastic civil war and, after capturing Jerusalem, reinstated Hyrcanus II as high priest but denied him the title of king. Rome soon installed the Herodian dynasty—of Idumean descent but Jewish by conversion—as a loyal replacement for the nationalist Hasmoneans. In 37 BCE, Herod the Great, the first client king of this line, took power after defeating the restored Hasmonean king Antigonus II Mattathias. Herod imposed heavy taxes, suppressed opposition, and centralized authority, which fostered widespread resentment.
Herod also carried out major monumental construction projects throughout his kingdom, and significantly expanded the Second Temple, which he transformed into one of the largest religious structures in the ancient world. After his death in 4 BCE, his kingdom was divided among his sons into a tetrarchy under continued Roman oversight. In 6 CE, Roman emperor Augustus transformed Judaea into a Roman province, deposing its last Jewish ruler, Herod Archelaus, and appointing a Roman governor in his place. That same year, a census triggered a small uprising by Judas of Galilee, the founder of a movement that rejected foreign authority and recognized only God as king. Over the next six decades, with the brief exception of a short period of Jewish autonomy under the client king Herod Agrippa I, the province remained under direct Roman administration. Some governors ruled with brutality and showed little regard for Jewish religious sensitivities, deepening resentment among the local population. This discontent was also fueled by poor governance, corruption, and growing economic inequality, along with rising tensions between Jews and neighboring populations over ethnic, religious, and territorial disputes. At the same time, collective memory of the Maccabean revolt and the period of Hasmonean independence continued to inspire hopes for national liberation from Roman control. In 64 CE, the Temple High Priest Joshua ben Gamla introduced a religious requirement for Jewish boys to learn to read from the age of six. Over the next few hundred years this requirement became steadily more ingrained in Jewish tradition. The Jewish–Roman wars were a series of large-scale revolts by Jewish subjects against the Roman Empire between 66 and 135 CE. The term primarily applies to the First Jewish–Roman War (66–73 CE) and the Bar Kokhba revolt (132–136 CE), both nationalist rebellions aimed at restoring Jewish independence in Judea. Some sources also include the Diaspora Revolt (115–117 CE), an ethno-religious conflict fought across the Eastern Mediterranean and including the Kitos War in Judaea. The Jewish–Roman wars had a devastating impact on the Jewish people, transforming them from a major population in the Eastern Mediterranean into a dispersed and persecuted minority. The First Jewish-Roman War culminated in the destruction of Jerusalem and other towns and villages in Judaea, resulting in significant loss of life and a considerable segment of the population being uprooted or displaced. Those who remained were stripped of any form of political autonomy. Subsequently, the brutal suppression of the Bar Kokhba revolt resulted in even more severe consequences. Judea witnessed a significant depopulation, as many Jews were killed, expelled, or sold into slavery. The outcome of the conflict marked the termination of efforts to reestablish a Jewish state until the modern era. Jews were banned from residing in the vicinity of Jerusalem, which the Romans rebuilt into the pagan colony of Aelia Capitolina, and the province of Judaea was renamed Syria Palaestina. Collectively, these events enhanced the role of Jewish diaspora, relocating the Jewish demographic and cultural center to Galilee and eventually to Babylonia, with smaller communities across the Mediterranean, the Middle East, and beyond. The Jewish–Roman wars also had a major impact on Judaism, after the central worship site of Second Temple Judaism, the Second Temple in Jerusalem, was destroyed by Titus's troops in 70 CE. 
The destruction of the Temple led to a transformation in Jewish religious practices, emphasizing prayer, Torah study, and communal gatherings in synagogues. This pivotal shift laid the foundation for the emergence of Rabbinic Judaism, which has been the dominant form of Judaism since late antiquity, after the codification of the Babylonian Talmud. Late Roman and Byzantine periods As a result of the disastrous effects of the Bar Kokhba revolt, Jewish presence in the region significantly dwindled. Over the next centuries, more Jews left for communities in the Diaspora, especially the large, rapidly growing Jewish communities in Babylonia and Arabia. Others remained in the Land of Israel, where the spiritual and demographic center shifted from the depopulated Judea to Galilee. Jewish presence also continued in the southern Hebron Hills, in Ein Gedi, and on the coastal plain. The Mishnah and the Jerusalem Talmud, huge compendiums of Rabbinical discussions, were compiled during the 2nd to 4th centuries CE in Tiberias and Jerusalem. Following the revolt, Judea's countryside saw an influx of pagan populations, including migrants from the nearby provinces of Syria, Phoenicia, and Arabia, whereas Aelia Capitolina, its immediate vicinity, and administrative centers were now inhabited by Roman veterans and settlers from the western parts of the empire. The Romans permitted a hereditary Rabbinical Patriarch from the House of Hillel, called the "Nasi", to represent the Jews in dealings with the Romans. One prominent figure was Judah ha-Nasi, credited with compiling the final version of the Mishnah, a vast collection of Jewish oral traditions. He also emphasized the importance of education in Judaism, leading to requirements that illiterate Jews be treated as outcasts. This might have contributed to some illiterate Jews converting to Christianity. Jewish seminaries, such as those at Shefaram and Bet Shearim, continued to produce scholars. The best of these became members of the Sanhedrin, which was located first at Sepphoris and later at Tiberias. In the Galilee, many synagogues dating from this period have been found, and the burial site of the Sanhedrin leaders was discovered in Beit She'arim. In the 3rd century, the Roman Empire faced an economic crisis and imposed heavy taxation to fund wars of imperial succession. This situation prompted additional Jewish migration from Syria Palaestina to the Sasanian Empire, known for its more tolerant environment; there, a flourishing Jewish community with important Talmudic academies thrived in Babylonia, engaging in a notable rivalry with the Talmudic academies of Palaestina. Early in the 4th century, the Emperor Constantine made Constantinople the capital of the East Roman Empire and made Christianity an accepted religion. His mother Helena made a pilgrimage to Jerusalem (326–328) and led the construction of the Church of the Nativity (birthplace of Jesus in Bethlehem), the Church of the Holy Sepulchre (burial site of Jesus in Jerusalem) and other key churches that still exist. The name Jerusalem was restored to Aelia Capitolina, which became a Christian city. Jews were still banned from living in Jerusalem, but were allowed to visit and worship at the site of the ruined temple. Over the course of the next century Christians worked to eradicate "paganism", leading to the destruction of classical Roman traditions and the demolition of pagan temples. In 351–2, another Jewish revolt in the Galilee erupted against a corrupt Roman governor.
The Roman Empire split in 390 CE and the region became part of the Eastern Roman Empire, known as the Byzantine Empire. Under Byzantine rule, much of the region and its non-Jewish population were won over by Christianity, which eventually became the dominant religion in the region. The presence of holy sites drew Christian pilgrims, some of whom chose to settle, contributing to the rise of a Christian majority. Christian authorities encouraged this pilgrimage movement and appropriated lands, constructing magnificent churches at locations linked to biblical narratives. Additionally, monks established monasteries near pagan settlements, encouraging the conversion of local pagans. During the Byzantine period, the Jewish presence in the region declined, and it is believed that Jews lost their majority status in Palestine in the fourth century. While Judaism remained the sole non-Christian religion tolerated, restrictions on Jews gradually increased, prohibiting the construction of new places of worship, holding public office, or owning Christian slaves. In 425, after the death of the last Nasi, Gamliel VI, the Nasi office and the Sanhedrin were officially abolished, and the standing of yeshivot weakened. The leadership void was gradually filled by the Jewish center in Babylonia, which would assume a leading role in the Jewish world for generations after the Byzantine period. During the 5th and 6th centuries CE, the region witnessed a series of Samaritan revolts against Byzantine rule. Their suppression resulted in the decline of Samaritan presence and influence, and further consolidated Christian domination. Though it is acknowledged that some Jews and Samaritans converted to Christianity during the Byzantine period, the reliable historical records are limited, and they pertain to individual conversions rather than entire communities. In 611, Khosrow II, ruler of Sassanid Persia, invaded the Byzantine Empire. He was helped by Jewish fighters recruited by Benjamin of Tiberias and captured Jerusalem in 614. The "True Cross" was captured by the Persians. The Jewish Himyarite Kingdom in Yemen may also have provided support. Nehemiah ben Hushiel was made governor of Jerusalem. Christian historians of the period claimed the Jews massacred Christians in the city, but there is no archeological evidence of destruction, leading modern historians to question their accounts. In 628, Kavad II (son of Kosrow) returned Palestine and the True Cross to the Byzantines and signed a peace treaty with them. Following the Byzantine re-entry, Heraclius massacred the Jewish population of Galilee and Jerusalem, while renewing the ban on Jews entering the latter. Early Muslim period The Levant was conquered by an Arab army under the command of ʿUmar ibn al-Khaṭṭāb in 635, and became the province of Bilad al-Sham of the Rashidun Caliphate. Two military districts—Jund Filastin and Jund al-Urdunn—were established in Palestine. A new city called Ramlah was built as the Muslim capital of Jund Filastin, while Tiberias served as the capital of Jund al-Urdunn. The Byzantine ban on Jews living in Jerusalem came to an end. In 661, Mu'awiya I was crowned Caliph in Jerusalem, becoming the first of the (Damascus-based) Umayyad dynasty. In 691, Umayyad Caliph Abd al-Malik (685–705) constructed the Dome of the Rock shrine on the Temple Mount, where the two Jewish temples had been located. A second building, the Al-Aqsa Mosque, was also erected on the Temple Mount in 705. 
Both buildings were rebuilt in the 10th century following a series of earthquakes. In 750, Arab discrimination against non-Arab Muslims led to the Abbasid Revolution, and the Umayyads were replaced by the Abbasid Caliphs, who built a new city, Baghdad, to be their capital. This period is known as the Islamic Golden Age; the Arab Empire was the largest in the world, and Baghdad was its largest and richest city. Both Arabs and minorities prospered across the region and much scientific progress was made. There were, however, setbacks: during the 8th century, the Caliph Umar II introduced a law requiring Jews and Christians to wear identifying clothing. Jews were required to wear yellow stars around their necks and on their hats; Christians had to wear blue. Clothing regulations arose during repressive periods of Arab rule and were designed more to humiliate than to persecute non-Muslims. A poll tax was imposed on all non-Muslims by Islamic rulers, and failure to pay could result in imprisonment or worse. In 982, Caliph Al-Aziz Billah of the Cairo-based Fatimid dynasty conquered the region. The Fatimids were followers of Isma'ilism, a branch of Shia Islam, and claimed descent from Fatima, Mohammed's daughter. Around the year 1010, the Church of the Holy Sepulchre (believed to be Jesus' burial site) was destroyed by the Fatimid Caliph al-Hakim, who relented ten years later and paid for it to be rebuilt. In 1020 al-Hakim claimed divine status and the newly formed Druze religion gave him the status of a messiah. Although the Arab conquest was relatively peaceful and did not cause widespread destruction, it did alter the country's demographics significantly. Over the ensuing several centuries, the region experienced a drastic decline in its population, from an estimated 1 million during Roman and Byzantine times to some 300,000 by the early Ottoman period. This demographic collapse was accompanied by a slow process of Islamization that resulted from the flight of non-Muslim populations, immigration of Muslims, and local conversion. The majority of the remaining populace belonged to the lowest classes. While the Arab conquerors themselves left the area after the conquest and moved on to other places, the settlement of Arab tribes in the area both before and after the conquest also contributed to the Islamization. As a result, the Muslim population steadily grew and the area became gradually dominated by Muslims on a political and social level. During the early Islamic period, many Christians and Samaritans, belonging to the Byzantine upper class, migrated from the coastal cities to northern Syria and Cyprus, which were still under Byzantine control, while others fled to the central highlands and the Transjordan. As a result, the coastal towns, formerly important economic centers connected with the rest of the Byzantine world, were emptied of most of their residents. Some of these cities—namely Ashkelon, Acre, Arsuf, and Gaza—now fortified border towns, were resettled by Muslim populations, who developed them into significant Muslim centers. The region of Samaria also underwent a process of Islamization as a result of waves of conversion among the Samaritan population and the influx of Muslims into the area. The predominantly Jacobite Monophysite Christian population had been hostile to Byzantine orthodoxy, and at times welcomed Muslim rule for that reason. There is no strong evidence for forced conversion, or that the jizya tax significantly affected such changes.
The demographic situation in Palestine was further altered by urban decline under the Abbasids, and it is thought that the 749 earthquake hastened this process: more Jews, Christians, and Samaritans emigrated to diaspora communities, while others remained in the devastated cities and poor villages until they converted to Islam. Historical records and archeological evidence suggest that many Samaritans converted under Abbasid and Tulunid rule, after suffering through severe difficulties such as droughts, earthquakes, religious persecution, heavy taxes and anarchy. The same region also saw the settlement of Arabs. Over the period, the Samaritan population drastically decreased, with the rural Samaritan population converting to Islam, and small urban communities remaining in Nablus and Caesarea, as well as in Cairo, Damascus, Aleppo and Sarepta. Nevertheless, the Muslim population remained a minority in a predominantly Christian area, and it is likely that this status persisted until the Crusader period. Crusades and Mongols In 1095, Pope Urban II called upon Christians to wage a holy war and recapture Jerusalem from Muslim rule. Responding to this call, Christians launched the First Crusade in the same year, a military campaign aimed at retaking the Holy Land, ultimately resulting in the successful siege and conquest of Jerusalem in 1099. In the same year, the Crusaders conquered Beit She'an and Tiberias, and in the following decade, they captured coastal cities with the support of Italian city-state fleets, establishing these coastal ports as crucial strongholds for Crusader rule in the region. Following the First Crusade, several Crusader states were established in the Levant, with the Kingdom of Jerusalem (Regnum Hierosolymitanum) assuming a preeminent position and enjoying special status among them. The population consisted predominantly of Muslims, Christians, Jews, and Samaritans, while the Crusaders remained a minority and relied on the local population who worked the soil. The region saw the construction of numerous robust castles and fortresses, yet efforts to establish permanent European villages proved unsuccessful. Around 1180, Raynald of Châtillon, ruler of Transjordan, provoked increasing conflict with the Ayyubid Sultan Saladin (Salah-al-Din), leading to the defeat of the Crusaders in the 1187 Battle of Hattin (above Tiberias). Saladin was then able to take Jerusalem peacefully and conquered most of the former Kingdom of Jerusalem. Saladin's court physician was Maimonides, a refugee from Almohad (Muslim) persecution in Córdoba, Spain, where all non-Muslim religions had been banned. The Christian world's response to the loss of Jerusalem came in the Third Crusade of 1190. After lengthy battles and negotiations, Richard the Lionheart and Saladin concluded the Treaty of Jaffa in 1192, whereby Christians were granted free passage to make pilgrimages to the holy sites, while Jerusalem remained under Muslim rule. In 1229, Jerusalem peacefully reverted to Christian control as part of a treaty between Holy Roman Emperor Frederick II and the Ayyubid sultan al-Kamil that ended the Sixth Crusade. In 1244, Jerusalem was sacked by the Khwarezmian Tatars, who decimated the city's Christian population, drove out the Jews and razed the city. The Khwarezmians were driven out by the Ayyubids in 1247. Mamluk period Between 1258 and 1291, the area was the frontier between Mongol invaders (occasional Crusader allies) and the Mamluks of Egypt.
The conflict impoverished the country and severely reduced the population. In Egypt, a caste of warrior slaves, known as the Mamluks, gradually took control of the kingdom. The Mamluks were mostly of Turkish origin, and were bought as children and then trained in warfare. They were highly prized warriors, who gave rulers independence of the native aristocracy. In Egypt they took control of the kingdom following a failed invasion by the Crusaders (Seventh Crusade). The Mamluk Sultan Qutuz of Egypt defeated the Mongols in the Battle of Ain Jalut ("Goliath's spring" near Ein Harod), ending the Mongol advances. He was assassinated by one of his generals, Baibars, who went on to eliminate most of the Crusader outposts. The Mamluks ruled Palestine until 1516, regarding it as part of Syria. In Hebron, Jews were banned from worshipping at the Cave of the Patriarchs (the second-holiest site in Judaism); they were allowed only seven steps inside the site, and the ban remained in place until Israel assumed control of the West Bank in the Six-Day War. The Egyptian Mamluk sultan Al-Ashraf Khalil conquered the last outpost of Crusader rule in 1291. The Mamluks, continuing the policy of the Ayyubids, made the strategic decision to destroy the coastal area and to bring desolation to many of its cities, from Tyre in the north to Gaza in the south. Ports were destroyed and various materials were dumped to make them inoperable. The goal was to prevent attacks from the sea, given the fear of the return of the Crusaders. This had a long-term effect on those areas, which remained sparsely populated for centuries. Activity in that period was concentrated further inland. With the 1492 expulsion of Jews from Spain and the 1497 persecution of Jews and Muslims by Manuel I of Portugal, many Jews moved eastward, with some deciding to settle in Mamluk Palestine. As a consequence, the local Jewish community underwent significant rejuvenation. The influx of Sephardic Jews began under Mamluk rule in the 15th century and continued throughout the 16th century, especially after the Ottoman conquest. As city-dwellers, the majority of Sephardic Jews preferred to settle in urban areas, mainly in Safed but also in Jerusalem, while the Musta'arbi community made up the majority of the Jews living in villages. Ottoman period Under the Mamluks, the area was a province of Bilad al-Sham (Syria). It was conquered by the Turkish Sultan Selim I in 1516–17, becoming a part of the province of Ottoman Syria for the next four centuries, first as the Damascus Eyalet and later as the Syria Vilayet (following the Tanzimat reorganization of 1864). With the more favorable conditions that followed the Ottoman conquest, the immigration of Jews fleeing Catholic Europe, which had already begun under Mamluk rule, continued, and soon an influx of exiled Sephardic Jews came to dominate the Jewish community in the area. In 1558, Selim II (1566–1574), successor to Suleiman, whose wife Nurbanu Sultan was Jewish, gave control of Tiberias to Doña Gracia Mendes Nasi, one of the richest women in Europe and an escapee from the Inquisition. She encouraged Jewish refugees to settle in the area and established a Hebrew printing press. Safed became a centre for study of the Kabbalah and other Jewish religious studies, culminating with Joseph Karo's writing of the Shulchan Aruch – published in 1565 in Venice – which became the near-universal standard of Jewish religious law.
Doña Nasi's nephew, Joseph Nasi, was made governor of Tiberias and he encouraged Jewish settlement from Italy. In 1660, a Druze power struggle led to the destruction of Safed and Tiberias. In the late 18th century a local Arab sheikh, Zahir al-Umar, created a de facto independent Emirate in the Galilee. Ottoman attempts to subdue the Sheikh failed, but after Zahir's death the Ottomans restored their rule in the area. In 1799, Napoleon briefly occupied the country and planned a proclamation inviting Jews to create a state. The proclamation was shelved following his defeat at Acre. In 1831, Muhammad Ali of Egypt, an Ottoman ruler who left the Empire and tried to modernize Egypt, conquered Ottoman Syria and imposed conscription, leading to the Arab revolt. In 1838, there was another Druze revolt. In 1839 Moses Montefiore met with Muhammed Pasha in Egypt and signed an agreement to establish 100–200 Jewish villages in the Damascus Eyalet of Ottoman Syria, but in 1840 the Egyptians withdrew before the deal was implemented, returning the area to Ottoman governorship. In 1844, Jews constituted the largest population group in Jerusalem. By 1896 Jews constituted an absolute majority in Jerusalem, but the overall population in Palestine was 88% Muslim and 9% Christian. Between 1882 and 1903, approximately 35,000 Jews moved to Palestine, known as the First Aliyah. In the Russian Empire, Jews faced growing persecution and legal restrictions. Half the world's Jews lived in the Russian Empire, where they were restricted to living in the Pale of Settlement. Severe pogroms in the early 1880s and legal repression led to 2 million Jews emigrating from the Russian Empire. 1.5 million went to the United States. Popular destinations were also Germany, France, the United Kingdom, the Netherlands, Argentina and Palestine. The Zionist movement began in earnest in 1882 with Leon Pinsker's pamphlet Auto-Emancipation, which argued for the creation of a Jewish national homeland as a means to avoid the violence plaguing Jewish communities in Eastern Europe. At the 1884 Katowice Conference, Russian Jews established the Bilu and Hovevei Zion ("Lovers of Zion") movements with the aim of settling in Palestine. In 1878, Russian Jewish emigrants established the village of Petah Tikva ("The Beginning of Hope"), followed by Rishon LeZion ("First to Zion") in 1882. The existing Ashkenazi communities were concentrated in the Four Holy Cities, extremely poor and relied on donations (halukka) from groups abroad, while the new settlements were small farming communities, but still relied on funding by the French Baron, Edmond James de Rothschild, who sought to establish profitable enterprises. Many early migrants could not find work and left, but despite the problems, more settlements arose and the community grew. After the Ottoman conquest of Yemen in 1881, a large number of Yemenite Jews also emigrated to Palestine, often driven by Messianism. In 1896 Theodor Herzl published Der Judenstaat (The Jewish State), in which he asserted that the solution to growing antisemitism in Europe (the so-called "Jewish Question") was to establish a Jewish state. In 1897, the World Zionist Organization was founded and the First Zionist Congress proclaimed its aim "to establish a home for the Jewish people in Palestine secured under public law." The Congress chose Hatikvah ("The Hope") as its anthem. Between 1904 and 1914, around 40,000 Jews settled in the area now known as Israel (the Second Aliyah). 
In 1908, the World Zionist Organization set up the Palestine Bureau (also known as the "Eretz Israel Office") in Jaffa and began to adopt a systematic Jewish settlement policy. In 1909, residents of Jaffa bought land outside the city walls and built the first entirely Hebrew-speaking town, Ahuzat Bayit (later renamed Tel Aviv). In 1915–1916, Talaat Pasha of the Young Turks forced around a million Armenian Christians from their homes in Eastern Turkey, marching them south through Syria, in what is now known as the Armenian genocide. The number of dead is thought to be around 700,000. Hundreds of thousands were forcibly converted to Islam. A community of survivors settled in Jerusalem; one of them developed the now-iconic Armenian pottery. During World War I, most Jews supported the Germans because they were fighting the Russians, who were regarded as the Jews' main enemy. In Britain, the government sought Jewish support for the war effort for a variety of reasons, including an antisemitic perception of "Jewish power" in the Ottoman Empire's Young Turks movement, which was based in Thessaloniki, the most Jewish city in Europe (40% of its 160,000 inhabitants were Jewish). The British also hoped to secure American Jewish support for US intervention on Britain's behalf. There was already sympathy for the aims of Zionism in the British government, including the Prime Minister Lloyd George. Over 14,000 Jews were expelled by the Ottoman military commander from the Jaffa area in 1914–1915, on suspicion of being subjects of Russia, an enemy power, or Zionists wishing to detach Palestine from the Ottoman Empire. When the entire population of both Jaffa and Tel Aviv, including Muslims, was subjected to an expulsion order in April 1917, the affected Jews could not return until the British conquest, completed in 1918, drove the Turks out of Southern Syria. A year earlier, in 1917, the British foreign minister, Arthur Balfour, had sent a public letter to the British Lord Rothschild, a leading member of his party and a leader of the Jewish community. The letter subsequently became known as the Balfour Declaration. It stated that the British Government "view[ed] with favour the establishment in Palestine of a national home for the Jewish people". The declaration provided the British government with a pretext for claiming and governing the country. New Middle Eastern boundaries were decided by an agreement between British and French bureaucrats. A Jewish Legion composed largely of Zionist volunteers organized by Ze'ev Jabotinsky and Joseph Trumpeldor participated in the British invasion; it had also taken part in the failed Gallipoli Campaign. The Nili Zionist spy network provided the British with details of Ottoman plans and troop concentrations. The Ottoman Empire had chosen to ally itself with Germany when the war began. Arab leaders dreamed of freeing themselves from Ottoman rule and establishing self-government or forming an independent Arab state. Britain therefore contacted Hussein bin Ali of the Kingdom of Hejaz and proposed cooperation. Together they organized the Arab Revolt, which Britain supplied with very large quantities of rifles and ammunition. In an operation combining British artillery and Arab infantry, the city of Aqaba on the Red Sea was captured. The Arab army then continued north while Britain attacked the Ottomans from the sea. In 1917–1918, Jerusalem and Damascus were conquered from the Ottomans. Britain then broke off cooperation with the Arab army.
It turned out that Britain had already entered into the secret Sykes–Picot Agreement that meant that only Britain and France would be allowed to administer the land conquered from the Ottoman Empire. After pushing out the Ottomans, Palestine came under martial law. The British, French and Arab Occupied Enemy Territory Administration governed the area shortly before the armistice with the Ottomans until the promulgation of the mandate in 1920. Mandatory Palestine The British Mandate (in effect, British rule) of Palestine, including the Balfour Declaration, was confirmed by the League of Nations in 1922 and came into effect in 1923. The territory of Transjordan was also covered by the Mandate but under separate rules that excluded it from the Balfour Declaration. Britain signed a treaty with the United States (which did not join the League of Nations) in which the United States endorsed the terms of the Mandate, which was approved unanimously by both the U.S. Senate and House of Representatives. The Balfour declaration was published on the 2nd of November 1917 and the Bolsheviks seized control of Russia a week later. This led to civil war in the Russian Empire. Between 1918 and 1921, a series of pogroms led to the death of at least 100,000 Jews (mainly in what is now Ukraine), and the displacement as refugees of a further 600,000. This led to further migration to Palestine. Between 1919 and 1923, some 40,000 Jews arrived in Palestine in what is known as the Third Aliyah. Many of the Jewish immigrants of this period were Socialist Zionists and supported the Bolsheviks. The migrants became known as pioneers (halutzim), experienced or trained in agriculture who established self-sustaining communes called kibbutzim. Malarial marshes in the Jezreel Valley and Hefer Plain were drained and converted to agricultural use. Land was bought by the Jewish National Fund, a Zionist charity that collected money abroad for that purpose. After the French victory over the Arab Kingdom of Syria ended hopes of Arab independence, there were clashes between Arabs and Jews in Jerusalem during the 1920 Nebi Musa riots and in Jaffa the following year, leading to the establishment of the Haganah underground Jewish militia. A Jewish Agency was created which issued the entry permits granted by the British and distributed funds donated by Jews abroad. Between 1924 and 1929, over 80,000 Jews arrived in the Fourth Aliyah, fleeing antisemitism and heavy tax burdens imposed on trade in Poland and Hungary, inspired by Zionism and motivated by the closure of United States borders by the Immigration Act of 1924 which severely limited immigration from Eastern and Southern Europe. Pinhas Rutenberg, a former Commissar of St Petersburg in Russia's pre-Bolshevik Kerensky Government, built the first electricity generators in Palestine. In 1925, the Jewish Agency established the Hebrew University in Jerusalem and the Technion (technological university) in Haifa. British authorities introduced the Palestine pound (worth 1000 "mils") in 1927, replacing the Egyptian pound as the unit of currency in the Mandate. From 1928, the democratically elected Va'ad Leumi (Jewish National Council or JNC) became the main administrative institution of the Palestine Jewish community (Yishuv) and included non-Zionist Jews. As the Yishuv grew, the JNC adopted more government-type functions, such as education, health care, and security. With British permission, the Va'ad Leumi raised its own taxes and ran independent services for the Jewish population. 
In 1929, tensions grew over the Kotel (Wailing Wall), the holiest spot in the world for modern Judaism, which was then a narrow alleyway where the British banned Jews from using chairs or curtains: many of the worshippers were elderly and needed seats, and they also wanted to separate women from men. The Mufti of Jerusalem said it was Muslim property and deliberately had cattle driven through the alley. He alleged that the Jews were seeking control of the Temple Mount. This provided the spark for the August 1929 Palestine riots. The main victims were the (non-Zionist) ancient Jewish community at Hebron, who were massacred. The riots led to right-wing Zionists establishing their own militia in 1931, the Irgun Tzvai Leumi (National Military Organization, known in Hebrew by its acronym "Etzel"), which was committed to a more aggressive policy towards the Arab population. During the interwar period, the perception grew that there was an irreconcilable tension between the two Mandatory functions: providing for a Jewish homeland in Palestine and preparing the country for self-determination. The British rejected the principle of majority rule or any other measure that would give the Arabs, who formed the majority of the population, control over Palestinian territory. Between 1929 and 1938, 250,000 Jews arrived in Palestine (Fifth Aliyah). In 1933, the Jewish Agency and the Nazis negotiated the Ha'avara Agreement (transfer agreement), under which 50,000 German Jews would be transferred to Palestine. The Jews' possessions were confiscated, and in return the Nazis allowed the Ha'avara organization to purchase 14 million pounds' worth of German goods for export to Palestine and use the proceeds to compensate the immigrants. Although many Jews wanted to leave Nazi Germany, the Nazis prevented Jews from taking any money and restricted them to two suitcases, so few could pay the British entry tax. The agreement was controversial, and the Labour Zionist leader who negotiated it, Haim Arlosoroff, was assassinated in Tel Aviv in 1933. The assassination was used by the British to create tension between the Zionist left and the Zionist right. Arlosoroff had been the boyfriend of Magda Ritschel some years before she married Joseph Goebbels. There has been speculation that he was assassinated by the Nazis to hide the connection, but there is no evidence for this. Between 1933 and 1936, 174,000 Jews arrived despite the large sums the British demanded for immigration permits: families with capital had to prove they had 1,000 pounds (equivalent to £85,824 in 2023), professionals 500 pounds, and skilled labourers 250 pounds. Jewish immigration and Nazi propaganda contributed to the large-scale 1936–1939 Arab revolt in Palestine, a largely nationalist uprising directed at ending British rule. The head of the Jewish Agency, Ben-Gurion, responded to the Arab Revolt with a policy of "Havlagah"—self-restraint and a refusal to be provoked by Arab attacks in order to prevent polarization. The Etzel group broke off from the Haganah in opposition to this policy. The British responded to the revolt with the Peel Commission (1936–37), a public inquiry that recommended that an exclusively Jewish territory be created in the Galilee and on the western coast (including the population transfer of 225,000 Arabs), with the rest becoming an exclusively Arab area.
The two main Jewish leaders, Chaim Weizmann and David Ben-Gurion, had convinced the Zionist Congress to give equivocal approval to the Peel recommendations as a basis for further negotiation. The plan was rejected outright by the Palestinian Arab leadership, and they renewed the revolt, which caused the British to abandon the plan as unworkable. Testifying before the Peel Commission, Weizmann said "There are in Europe 6,000,000 people ... for whom the world is divided into places where they cannot live and places where they cannot enter." In 1938, the US called an international conference to address the question of the vast numbers of Jews trying to escape Europe. Britain made its attendance contingent on Palestine being kept out of the discussion. No Jewish representatives were invited. The Nazis proposed their own solution: that the Jews of Europe be shipped to Madagascar (the Madagascar Plan). The conference proved fruitless, and the Jews remained stuck in Europe. With millions of Jews trying to leave Europe and every country closed to Jewish migration, the British decided to close Palestine. The White Paper of 1939 recommended that an independent Palestine, governed jointly by Arabs and Jews, be established within 10 years. The White Paper agreed to allow 75,000 Jewish immigrants into Palestine over the period 1940–44, after which migration would require Arab approval. Both the Arab and Jewish leadership rejected the White Paper. In March 1940 the British High Commissioner for Palestine issued an edict banning Jews from purchasing land in 95% of Palestine. Jews now resorted to illegal immigration (Aliyah Bet or "Ha'apalah"), often organized by the Mossad Le'aliyah Bet and the Irgun. With no outside help and no countries ready to admit them, very few Jews managed to escape Europe between 1939 and 1945. Those caught by the British were mostly imprisoned in Mauritius. During the Second World War, the Jewish Agency worked to establish a Jewish army that would fight alongside the British forces. Churchill supported the plan, but British military and government opposition led to its rejection. The British demanded that the number of Jewish recruits match the number of Arab recruits. In June 1940, Italy declared war on the British Commonwealth and sided with Germany. Within a month, Italian planes bombed Tel Aviv and Haifa, inflicting multiple casualties. In May 1941, the Palmach was established to defend the Yishuv against the planned Axis invasion through North Africa. The British refusal to provide arms to the Jews, even when Rommel's forces were advancing through Egypt in June 1942 (intent on occupying Palestine), and the 1939 White Paper led to the emergence of a Zionist leadership in Palestine that believed conflict with Britain was inevitable. Despite this, the Jewish Agency called on Palestine's Jewish youth to volunteer for the British Army. 30,000 Palestinian Jews and 12,000 Palestinian Arabs enlisted in the British armed forces during the war. In June 1944 the British agreed to create a Jewish Brigade that would fight in Italy. Approximately 1.5 million Jews around the world served in every branch of the Allied armies, mainly in the Soviet and US armies. 200,000 Jews died serving in the Soviet army alone. A small group (about 200 activists), dedicated to resisting the British administration in Palestine, broke away from the Etzel (which advocated support for Britain during the war) and formed the "Lehi" (Stern Gang), led by Avraham Stern.
In 1942, the USSR released the Revisionist Zionist leader Menachem Begin from the Gulag and he went to Palestine, taking command of the Etzel organization with a policy of increased conflict against the British. At about the same time, Yitzhak Shamir escaped from the camp in Eritrea where the British were holding Lehi activists without trial, and took command of the Lehi (Stern Gang). Jews in the Middle East were also affected by the war. Most of North Africa came under Nazi control and many Jews were used as slave labour. The 1941 pro-Axis coup in Iraq was accompanied by massacres of Jews. The Jewish Agency put together plans for a last stand in the event of Rommel invading Palestine (the Nazis planned to exterminate Palestine's Jews). Between 1939 and 1945, the Nazis, aided by local forces, led systematic efforts to kill every person of Jewish extraction in Europe (the Holocaust), causing the deaths of approximately 6 million Jews. A quarter of those killed were children. The Polish and German Jewish communities, which had played an important role in defining the pre-1945 Jewish world, mostly ceased to exist. In the United States and Palestine, Jews of European origin became disconnected from their families and roots. As the Holocaust mainly affected Ashkenazi Jews, Sephardi and Mizrahi Jews, who had been a minority, became a much more significant factor in the Jewish world. Those Jews who survived in central Europe were displaced persons (refugees); an Anglo-American Committee of Inquiry, established to examine the Palestine issue, surveyed their ambitions and found that over 95% wanted to migrate to Palestine. Within the Zionist movement, the moderate, pro-British Weizmann (himself a British citizen), whose son had died flying in the RAF, was undermined by Britain's anti-Zionist policies. Leadership of the movement passed to the Jewish Agency in Palestine, now led by the anti-British Socialist-Zionist party (Mapai) under David Ben-Gurion. The British Empire was severely weakened by the war. In the Middle East, the war had made Britain conscious of its dependence on Arab oil. Shortly after VE Day, the Labour Party won the general election in Britain. Although Labour Party conferences had for years called for the establishment of a Jewish state in Palestine, the Labour government now decided to maintain the 1939 White Paper policies. Illegal migration (Aliyah Bet) became the main form of Jewish entry into Palestine. Across Europe, Bricha ("flight"), an organization of former partisans and ghetto fighters, smuggled Holocaust survivors from Eastern Europe to Mediterranean ports, where small boats tried to breach the British blockade of Palestine. Meanwhile, Jews from Arab countries began moving into Palestine overland. Despite British efforts to curb immigration, during the 14 years of the Aliyah Bet, over 110,000 Jews entered Palestine. By the end of World War II, the Jewish population of Palestine had increased to 33% of the total population. In an effort to win independence, Zionists now waged a guerrilla war against the British. The main underground Jewish militia, the Haganah, formed an alliance called the Jewish Resistance Movement with the Etzel and Stern Gang to fight the British. In June 1946, following instances of Jewish sabotage such as the Night of the Bridges, the British launched Operation Agatha, arresting 2,700 Jews, including the leadership of the Jewish Agency, whose headquarters were raided. Those arrested were held without trial.
On 4 July 1946, a massive pogrom in Poland led to a wave of Holocaust survivors fleeing Europe for Palestine. Three weeks later, the Irgun bombed the British military headquarters at the King David Hotel in Jerusalem, killing 91 people. In the days following the bombing, Tel Aviv was placed under curfew and over 120,000 Jews, nearly 20% of the Jewish population of Palestine, were questioned by the police. In the US, Congress criticized British handling of the situation and considered delaying loans that were vital to British post-war recovery. The alliance between the Haganah and Etzel was dissolved after the King David bombing. Between 1945 and 1948, 100,000–120,000 Jews left Poland. Their departure was largely organized by Zionist activists under the umbrella of the semi-clandestine organization Berihah ("Flight"). Berihah was also responsible for the organized emigration of Jews from Romania, Hungary, Czechoslovakia and Yugoslavia, bringing the total (including Poland) to 250,000 Holocaust survivors. The British imprisoned the Jews trying to enter Palestine in the Atlit detainee camp and the Cyprus internment camps. Those held were mainly Holocaust survivors, including large numbers of children and orphans. In response to Cypriot fears that the Jews would never leave, and because the 75,000 quota established by the 1939 White Paper had never been filled, the British allowed the refugees to enter Palestine at a rate of 750 per month. On 2 April 1947, the United Kingdom requested that the question of Palestine be handled by the General Assembly. The General Assembly created a committee, the United Nations Special Committee on Palestine (UNSCOP), to report on "the question of Palestine". In July 1947 UNSCOP visited Palestine and met with Jewish and Zionist delegations. The Arab Higher Committee boycotted the meetings. During the visit, the British Foreign Secretary, Ernest Bevin, ordered that passengers from an Aliyah Bet ship, SS Exodus 1947, be sent back to Europe. The migrants on the ship, Holocaust survivors, were forcibly removed by British troops at Hamburg, Germany. The principal non-Zionist Orthodox Jewish (or Haredi) party, Agudat Israel, recommended to UNSCOP that a Jewish state be set up after reaching a religious status quo agreement with Ben-Gurion. The agreement granted an exemption from military service to a quota of yeshiva (religious seminary) students and to all Orthodox women, made the Sabbath the national weekend, guaranteed kosher food in government institutions and allowed Orthodox Jews to maintain a separate education system. The majority report of UNSCOP proposed "an independent Arab State, an independent Jewish State, and the City of Jerusalem", the last to be under "an International Trusteeship System". On 29 November 1947, in Resolution 181 (II), the General Assembly adopted the majority report of UNSCOP, but with slight modifications. The Plan also called for the British to allow "substantial" Jewish migration by 1 February 1948. Neither Britain nor the UN Security Council took any action to implement the recommendation made by the resolution, and Britain continued detaining Jews attempting to enter Palestine. Concerned that partition would severely damage Anglo-Arab relations, Britain denied UN representatives access to Palestine during the period between the adoption of Resolution 181 (II) and the termination of the British Mandate. The British withdrawal was completed in May 1948.
However, Britain continued to hold Jewish immigrants of "fighting age" and their families on Cyprus until March 1949. The General Assembly's vote caused joy in the Jewish community and anger in the Arab community. Violence broke out between the sides, escalating into civil war. From January 1948, operations became increasingly militarized, with the intervention of a number of Arab Liberation Army regiments inside Palestine, each active in a variety of distinct sectors around the different coastal towns. They consolidated their presence in Galilee and Samaria. Abd al-Qadir al-Husayni came from Egypt with several hundred men of the Army of the Holy War. Having recruited a few thousand volunteers, he organized the blockade of the 100,000 Jewish residents of Jerusalem. The Yishuv tried to supply the city using convoys of up to 100 armoured vehicles, but largely failed. By March, almost all of the Haganah's armoured vehicles had been destroyed, the blockade was in full operation, and hundreds of Haganah members who had tried to bring supplies into the city had been killed. Up to 100,000 Arabs, from the urban upper and middle classes in Haifa, Jaffa and Jerusalem, or from Jewish-dominated areas, evacuated abroad or to Arab centres eastwards. This situation caused the US to withdraw its support for the partition plan, encouraging the Arab League to believe that the Palestinian Arabs, reinforced by the Arab Liberation Army, could put an end to the plan for partition. The British, on the other hand, decided on 7 February 1948 to support the annexation of the Arab part of Palestine by Transjordan. The Jordanian army was commanded by the British. David Ben-Gurion reorganized the Haganah and made conscription obligatory; every Jewish man and woman in the country had to receive military training. Thanks to funds raised by Golda Meir from sympathisers in the United States, and Stalin's decision to support the Zionist cause, the Jewish representatives of Palestine were able to purchase substantial quantities of arms in Eastern Europe. Ben-Gurion gave Yigael Yadin responsibility for planning for the announced intervention of the Arab states. The result of his analysis was Plan Dalet, in which the Haganah passed from the defensive to the offensive. The plan sought to establish Jewish territorial continuity by conquering mixed zones. Tiberias, Haifa, Safed, Beisan, Jaffa and Acre fell, resulting in the flight of more than 250,000 Palestinian Arabs. On 14 May 1948, the day the last British forces left Haifa, the Jewish People's Council gathered at the Tel Aviv Museum and proclaimed the establishment of a Jewish state, to be known as the State of Israel. State of Israel In 1948, following the 1947–1948 war in Mandatory Palestine, the Israeli Declaration of Independence sparked the 1948 Arab–Israeli War. This resulted in the 1948 Palestinian expulsion and flight from the land that the State of Israel came to control, and led to waves of Jewish immigration from other parts of the Middle East. The latter half of the 20th century saw further conflicts between Israel and its neighbouring Arab nations. In 1967, the Six-Day War erupted; in its aftermath, Israel captured and occupied the Golan Heights from Syria, the West Bank from Jordan, and the Gaza Strip and the Sinai Peninsula from Egypt. In 1973, the Yom Kippur War began with an attack by Egypt on the Israeli-occupied Sinai Peninsula. In 1979, the Egypt–Israel peace treaty was signed, based on the Camp David Accords.
In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian National Authority. In 1994, the Israel–Jordan peace treaty was signed. Despite efforts to finalize the peace agreement, the conflict continues.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-24] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Because they used a binary system rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in Berlin in 1941 as the first company devoted solely to developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes); Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required re-wiring and re-structuring the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and by which it is provided with data. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
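The point above about circuits arranged into logic gates can be illustrated with a minimal sketch in Python; the gate and function names below are invented for illustration and show how AND, OR and XOR gates combine single bits into a one-bit full adder, the kind of building block an ALU's addition circuitry is made of.

```python
# Minimal sketch: modelling 1-bit logic gates and a full adder.
# Function names are illustrative, not taken from any real hardware library.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_8bit(x, y):
    """Chain eight full adders to add two 8-bit numbers, bit by bit."""
    result, carry = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # wraps modulo 256, like an 8-bit register

print(add_8bit(200, 100))  # 44, i.e. (200 + 100) mod 256
```

Chaining the gate outputs this way mirrors how the on/off state of some circuits controls the state of others, as described above.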
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the next instruction from the memory address held in the program counter, decodes it into control signals, fetches any data the instruction requires, has the ALU or other hardware carry out the operation, writes the result back to a register or memory, and then updates the program counter to point to the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
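The fetch, decode and execute cycle and the role of the program counter described above can be sketched in a few lines of Python; the toy instruction set, register names and program below are invented for illustration, and a jump is implemented simply by overwriting the program counter.

```python
# Minimal sketch of a fetch-decode-execute loop for an invented toy machine.
# Program: set accumulator to 0; add 5 repeatedly; stop once it reaches 20.

memory = {0: ("LOAD", 0), 1: ("ADD", 5), 2: ("JUMP_IF_LT", 20, 1), 3: ("HALT",)}

accumulator = 0
program_counter = 0

while True:
    instruction = memory[program_counter]   # fetch the next instruction
    opcode, *operands = instruction         # decode it
    program_counter += 1                    # by default, move to the next cell
    if opcode == "LOAD":                    # execute
        accumulator = operands[0]
    elif opcode == "ADD":
        accumulator += operands[0]
    elif opcode == "JUMP_IF_LT":
        limit, target = operands
        if accumulator < limit:
            program_counter = target        # a jump just rewrites the counter
    elif opcode == "HALT":
        break

print(accumulator)  # 20
```

Because the jump merely rewrites the program counter, the same mechanism gives loops and conditional execution, exactly the control flow mentioned above.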
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
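Returning to the byte arithmetic described earlier in this passage, the following Python snippet (with arbitrarily chosen values) illustrates the 256 values a byte can hold, how a negative number is stored in two's complement, and how several consecutive bytes combine to hold a larger value.

```python
# Illustrative sketch of byte storage; the example values are arbitrary.

# A single byte holds one of 2**8 = 256 values.
print(2 ** 8)                      # 256

# Two's complement: -42 stored in one byte has the bit pattern of 256 - 42.
stored = -42 & 0xFF                # the raw byte as it would sit in memory
print(stored, bin(stored))         # 214 0b11010110

# Interpreting that byte as signed recovers -42.
recovered = stored - 256 if stored >= 128 else stored
print(recovered)                   # -42

# Larger numbers span several consecutive bytes; here, four bytes.
value = 305419896                  # 0x12345678
as_bytes = value.to_bytes(4, "little")
print(list(as_bytes))              # [120, 86, 52, 18]
print(int.from_bytes(as_bytes, "little"))  # 305419896
```

The same bit patterns could equally represent letters or instructions; as the text notes, it is the software that decides what the numbers mean.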
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
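The time-sharing scheme described above, in which an interrupt ends each program's slice and the next program gets a turn, can be sketched in a few lines of Python; the task names and slice length below are invented for illustration, with generators standing in for programs that can be paused and resumed.

```python
# Minimal sketch of round-robin time-sharing; tasks and names are invented.
from collections import deque

def make_task(name, steps):
    """A 'program' that yields control back after every unit of work."""
    for i in range(1, steps + 1):
        yield f"{name}: step {i}"

ready_queue = deque([make_task("editor", 3),
                     make_task("player", 2),
                     make_task("backup", 4)])

while ready_queue:
    task = ready_queue.popleft()      # pick the next runnable program
    try:
        print(next(task))             # let it run for one time slice
    except StopIteration:             # the program finished; drop it
        continue
    ready_queue.append(task)          # "interrupt": put it back in line
```

Each program only advances one step per turn, yet the interleaved output gives the appearance of the three tasks running at the same time, which is the effect the text describes.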
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
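One way to write such a program in the MIPS assembly language is sketched below; this is an illustrative reconstruction rather than a verbatim listing, keeping the running sum in one register and the current number in another until the counter passes 1,000 (the register choices follow common MIPS conventions).

```asm
        # Reconstructed sketch: sum the integers from 1 to 1,000.
begin:  addi $t0, $zero, 0        # running sum = 0
        addi $t1, $zero, 1        # current number = 1
loop:   slti $t2, $t1, 1001       # is the current number still <= 1000?
        beq  $t2, $zero, finish   # if not, leave the loop
        add  $t0, $t0, $t1        # sum = sum + current number
        addi $t1, $t1, 1          # move to the next number
        j    loop                 # repeat
finish: add  $v0, $t0, $zero      # copy the result (500500) to $v0
```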
An example of such a program, written in the MIPS assembly language, is sketched above. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. (For comparison, a one-line high-level version of the earlier summation example appears at the end of this section.)

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
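As a point of comparison with the assembly sketch earlier in this section, the same 1-to-1,000 summation collapses to a single expression in a high-level language; Python is used here purely as an illustration and is not singled out by the text above:

    # The entire 1-to-1,000 summation from the earlier example, in one line.
    print(sum(range(1, 1001)))   # prints 500500

The same two lines can run unchanged on an ARM phone or an x86 PC, because the interpreter or compiler, not the programmer, deals with each machine's instruction set.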
Networking and the Internet

Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.

Logic gates are a common abstraction that can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (a toy numerical illustration appears at the end of this section). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
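As a minimal, framework-free illustration of the parameter-adjustment idea described in the machine-learning paragraph above, the following Python sketch fits the two parameters of a straight line by gradient descent; the data points and learning rate are invented for the example:

    # Fit y = w*x + b to a handful of points by repeatedly nudging w and b.
    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # lies on y = 2x + 1
    w, b = 0.0, 0.0                     # the model's two adjustable parameters
    learning_rate = 0.05

    for step in range(2000):
        grad_w = grad_b = 0.0
        for x, y in data:
            error = (w * x + b) - y     # prediction minus target
            grad_w += 2 * error * x     # derivative of squared error w.r.t. w
            grad_b += 2 * error         # derivative of squared error w.r.t. b
        n = len(data)
        w -= learning_rate * grad_w / n   # move each parameter downhill
        b -= learning_rate * grad_b / n

    print(round(w, 2), round(b, 2))     # approaches 2.0 and 1.0

Real neural networks do the same thing with millions or billions of parameters, which is why the parallel arithmetic of GPUs mentioned above matters so much.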
========================================
[SOURCE: https://www.theverge.com/hbo] | [TOKENS: 1504]
HBO

Originally a private cable network, then a premium cable channel, then a mini-network of specialized and dedicated channels, HBO has evolved into a powerhouse of original content production. Game of Thrones is its most obvious success story. But series like Deadwood, The Sopranos, Westworld, True Detective, The Wire, Sex and the City, Girls, and many more have made the network a strong contender in the crowded streaming landscape. With the launch of HBO Max, the network’s offerings have been paired with major titles from Warner Bros., DC, Studio Ghibli, and more. The Verge can help you sort through HBO’s new content and follow its evolving business model as it looks for its next Game of Thrones.

HBO’s medical drama has been teasing out a smart story about what makes gen AI so tempting and concerning.

Six years after its US debut, HBO Max is coming to the UK and Ireland on March 26th.

Latest In HBO

Though HBO still hasn’t announced a firm release date for House of the Dragon’s upcoming third season, there’s a new trailer teasing out Rhaenyra’s plan to make her enemies pay using her squad of newly-tamed dragons. The new season drops some time in June.

Now, Paramount is offering to cover the $2.8 billion termination fee that Warner Bros. Discovery would owe Netflix for abandoning the $82.7 billion merger agreement. It’s also tossing in a $0.25 per share “ticking fee” that it would pay shareholders for every quarter its deal hasn’t closed beyond December 31st, 2026. [The Hollywood Reporter]

That’s George R.R. Martin, in a big profile published by The Hollywood Reporter. Relatable. The piece also details new work being done on a potential Game of Thrones sequel series featuring Jon Snow; Arya Stark might be in it, too. But that show is in “very early development,” THR says. [The Hollywood Reporter]

HBO’s new spinoff prequel takes a humorous look at the grimy lives of Westeros’ smallfolk.

With the renewed “multi-year partnership,” A24 films will come first to HBO and HBO Max after leaving theaters. The two sides initially announced a deal in 2023. [Pressroom]

Well, they’re not kids anymore, but things are not looking good for the cast of Euphoria in the new season 3 trailer, which sees just about everyone involved in some kind of criminal enterprise or... content creation. The series returns to HBO on April 12th.

Last week, we talked about how Warner Bros. — quite reasonably! — had some agita about David Ellison’s bid for the company. Well, what do you know, David’s daddy is going to personally guarantee the offer. Also, since the last time we talked, Jared Kushner backed out of the nepo baby bid. [nytimes.com]

We don’t even know who will own HBO Max this time next year, but the streamer is putting on a good face with this sizzle reel of new footage from upcoming seasons of House of the Dragon, Euphoria, Dune: Prophecy and Lanterns — all of which are coming in 2026.

Netflix may be the frontrunner now, but the war for Warner Bros. could end in a number of different ways.

Since Netflix announced that it was the frontrunner to buy Warner Bros., David Ellison’s Paramount Skydance has been getting more hostile in its bids to own the legacy studio. But Semafor reports that Paramount’s tactics have raised eyebrows in Washington, where some think Ellison is banking on favoritism from Trump’s Justice Department. [Semafor]
After launching a hostile bid for the entertainment giant, Paramount’s Ellison told CNBC that Netflix’s deal to buy part of WBD would create a company with “unprecedented market power:” When you combine the number one streamer with the number three streamer, that creates a company that has unprecedented market power, north of 400 million subscribers. The next largest competitor is Disney, with just under 200 million. That’s bad for Hollywood, that’s bad for the creative community, that’s bad for consumers.

Gorilla Grodd — DC Comics’ gorilla supervillain who has longstanding beef with the Flash — is set to be the focus of DC Crime, a new HBO Max “true” crime docuseries hosted by Daily Planet reporter Jimmy Olsen (Skyler Gisondo). No premiere date yet, but Tony Yacenda and Dan Perrault are attached to write / showrun the project. [Variety]

Rather than waiting for Sunday, HBO now plans to drop It: Welcome to Derry’s second episode on October 31st just in time for the end of spooky season. [HBO Pressroom]

The It prequel series is a half-hearted attempt at crossing Stephen King’s opus with Stranger Things.

Weapons is set to make its HBO Max debut on October 24th, which might get folks intrigued about the spinoff prequel director Zach Cregger has in development. [HBO Press Room]

The latest trailer for HBO’s Welcome to Derry series is chock full of alarming new scenes from the It prequel, but the most intriguing thing about it are its shots of what seems to be Pennywise’s arrival on Earth from space(?) in a massive ball of fire.

According to The Wall Street Journal, the newly merged Paramount Skydance Corporation is thinking about making a majority cash bid to acquire the entirety of Warner Bros. Discovery — a move that would consolidate two of the world’s largest media conglomerates into a single entity run by billionaire Larry Ellison’s son. [The Wall Street Journal]

When HBO’s new comedy The Chair Company premieres on October 12th, William Ronald Trosper (Tim Robinson) will set out to discover the truth behind “a far-reaching conspiracy” that has him questioning everything about his life. [HBO Pressroom]

After months of teasing us all with snippets from It: Welcome to Derry, HBO has finally given the prequel series an October 26th premiere date.

The streamer will add Your Name, Fortune Favors Lady Nikuko, Children Who Chase Lost Voices, Ghost Cat Anzu, and more titles starting September 1st. It’s also bringing Shin Godzilla and other films like the 1985 sci-fi flick Angel’s Egg to the platform later this year and into next. [Warner Bros. Discovery]

Though technically, it was Max at the time. All but 200,000 of them came from international markets, bringing Max to 125.7 million subscribers worldwide — and according to Warner Bros. Discovery, still on track to hit 150 million next year. The streaming division turned a profit of $293 million, after a loss this time last year, which will be good news for the Warner Bros. bit of the company ahead of their separation next year. [s201.q4cdn.com]

A collection of 15 shows will be available on HBO Max, Vulture reports, including the Ricky Gervais-led version of The Office. They’ll be available until the end of September. [vulture.com]
========================================
[SOURCE: https://www.wired.com/story/government-docs-reveal-new-details-about-tesla-and-waymo-robotaxi-programs/] | [TOKENS: 2575]
Aarian Marshall, Gear, Feb 20, 2026 4:53 PM

Government Docs Reveal New Details About Tesla and Waymo Robotaxis’ Human Babysitters

Self-driving-vehicle companies are revealing new details about their safety-critical “remote assistance” programs—but questions remain.

Photograph: Eli Hartman/Getty Images

Are self-driving vehicles really just big, remote-controlled cars, with nameless and faceless people in far-off call centers piloting the things from behind consoles? As the vehicles and their science-fiction-like software expand to more cities, the conspiracy theory has rocketed around group chats and TikToks. It’s been powered, in part, by the reluctance of self-driving car companies to talk in specifics about the humans who help make their robots go.

But this month, in government documents submitted by Alphabet subsidiary Waymo and electric-auto maker Tesla, the companies have revealed more details about the people and programs that help the vehicles when their software gets confused.

The details of these companies' “remote assistance” programs are important because the humans supporting the robots are critical in ensuring the cars are driving safely on public roads, industry experts say. Even robotaxis that run smoothly most of the time get into situations that their self-driving systems find perplexing. See, for example, a December power outage in San Francisco that killed stop lights around the city, stranding confused Waymos in several intersections. Or the ongoing government probes into several instances of these cars illegally blowing past stopped school buses unloading students in Austin, Texas. (The latter led Waymo to issue a software recall.) When this happens, humans get the cars out of the jam by directing or “advising” them from afar.

These jobs are important because if people do them wrong, they can be the difference between, say, a car stopping for or running a red light. “For the foreseeable future, there will be people who play a role in the vehicles’ behavior, and therefore have a safety role to play,” says Philip Koopman, an autonomous-vehicle software and safety researcher at Carnegie Mellon University. One of the hardest safety problems associated with self-driving, he says, is building software that knows when to ask for human help.

In other words: If you care about robot safety, pay attention to the people.

The People of Waymo

Waymo operates a paid robotaxi service in six metros—Atlanta, Austin, Los Angeles, Phoenix, and the San Francisco Bay Area—and has plans to launch in at least 10 more, including London, this year. Now, in a blog post and letter submitted to US senator Ed Markey this week, the company made public more aspects of what it calls its “remote assistance” (RA) program, which uses remote workers to respond to requests from Waymo’s vehicle software when it determines it needs help. These humans give data or advice to the systems, writes Ryan McNamara, Waymo's vice president and global head of operations. The system can use or reject the information that humans provide.

“Waymo’s RA agents provide advice and support to the Waymo Driver but do not directly control, steer, or drive the vehicle,” McNamara writes—denying, implicitly, the charge that Waymos are simply remote-controlled cars. About 70 assistants are on duty at any given time to monitor some 3,000 robotaxis, the company says.
The low ratio indicates the cars are doing much of the heavy lifting.

Waymo also confirmed in its letter what an executive told Congress in a hearing earlier this month: Half of these remote assistance workers are contractors overseas, in the Philippines. (The company says it has two other remote assistance offices in Arizona and Michigan.) These workers are licensed to drive in the Philippines, McNamara writes, but are trained on US road rules. All remote assistance workers are drug- and alcohol-tested when they are hired, the company says, and 45 percent are drug-tested every three months as part of Waymo’s random testing program.

The company says a highly trained US-based team handles the most complex remote interactions, including collisions, contacts with law enforcement and riders, and interactions with regulatory agencies. The company declined to comment beyond the details in its letter.

Tesla’s Human Babysitters

Tesla has operated a small robotaxi service in Austin, Texas, since last June. The service started with human safety monitors sitting in the vehicles’ front passenger seats, ready to intervene if the tech went wrong. Last month, CEO Elon Musk said the company had started to take these monitors out of the front seats. He acknowledged that while the company did use “chase cars” to monitor and intervene with the software, it had started to operate some cars without that more direct human intervention. (A larger but still limited Tesla ride-hailing service in the Bay Area operates with human drivers behind the wheel.) But the company has not revealed much about the people who help its vehicles out of jams, or how they do the job.

Now, in a filing submitted to the California Public Utilities Commission this week, Tesla AI technical program manager Dzuy Cao writes that Tesla runs two offices of “remote operators,” based in Austin and the Bay Area. (In a seeming dig at Waymo’s Philippines revelations, Cao emphasizes that it “requires that its remote operators be located domestically.”) The company says these operators undergo “extensive” background checks and drug and alcohol testing, and have valid US driver’s licenses.

But Tesla still hasn’t revealed how often these operators intervene with its self-driving technology, and exactly how they do it. Tesla didn’t respond to WIRED’s request for comment.

The details of these remote programs could determine whether self-driving cars actually keep others on the road out of harm’s way. “If there’s a person who can make a mistake that can result in or contribute to a crash, then you have a safety issue you have to deal with,” Koopman says.
========================================
[SOURCE: https://en.wikipedia.org/wiki/The_New_York_Times#Unionization] | [TOKENS: 13653]
The New York Times

The New York Times (NYT)[b] is a newspaper based in Manhattan, New York City. The New York Times covers domestic, national, and international news, and publishes opinion pieces and reviews. As one of the longest-running newspapers in the United States, the Times serves as one of the country's newspapers of record. As of August 2025, The New York Times had 11.88 million total and 11.3 million online subscribers, both by significant margins the highest numbers for any newspaper in the United States; the total also included 580,000 print subscribers. The New York Times is published by the New York Times Company; since 1896, the company has been chaired by the Ochs-Sulzberger family, whose current chairman and the paper's publisher is A. G. Sulzberger. The Times is headquartered at The New York Times Building in Midtown Manhattan. The Times was founded as the conservative New-York Daily Times in 1851, and came to national recognition in the 1870s with its aggressive coverage of corrupt politician Boss Tweed. Following the Panic of 1893, Chattanooga Times publisher Adolph Ochs gained a controlling interest in the company. In 1935, Ochs was succeeded by his son-in-law, Arthur Hays Sulzberger, who began a push into European news. Sulzberger's son Arthur Ochs Sulzberger became publisher in 1963, adapting to a changing newspaper industry and introducing radical changes. The New York Times was involved in the landmark 1964 U.S. Supreme Court case New York Times Co. v. Sullivan, which restricted the ability of public officials to sue the media for defamation. In 1971, The New York Times published the Pentagon Papers, an internal Department of Defense document detailing the United States's historical involvement in the Vietnam War, despite pushback from then-president Richard Nixon. In the landmark decision New York Times Co. v. United States (1971), the Supreme Court ruled that the First Amendment guaranteed the right to publish the Pentagon Papers. In the 1980s, the Times began a two-decade progression to digital technology and launched nytimes.com in 1996. In the 21st century, it shifted its publication online amid the global decline of newspapers. Currently, the Times maintains several regional bureaus staffed with journalists across six continents. It has expanded to several other publications, including The New York Times Magazine, The New York Times International Edition, and The New York Times Book Review. In addition, the paper has produced several television series, podcasts—including The Daily—and games through The New York Times Games. The New York Times has been involved in a number of controversies in its history. Among other accolades, it has been awarded the Pulitzer Prize 135 times since 1918, the most of any publication. According to a 2025 Pew Research Center study on educational differences among audiences of 30 major U.S. news outlets, The New York Times had the highest proportion of college-educated readers among the daily newspapers surveyed, with 56% of its audience holding at least a bachelor's degree.

History

The New York Times was established in 1851 as the New-York Daily Times by New-York Tribune journalists Henry Jarvis Raymond and George Jones. The Times experienced significant circulation, particularly among conservatives; New-York Tribune publisher Horace Greeley praised the Times. During the American Civil War, Times correspondents gathered information directly from Confederate states.
In 1869, Jones inherited the paper from Raymond, who had changed its name to The New-York Times. Under Jones, the Times began to publish a series of articles criticizing Tammany Hall political boss William M. Tweed, despite vehement opposition from other New York newspapers. In 1871, The New-York Times published Tammany Hall's accounting books; Tweed was tried in 1873 and sentenced to twelve years in prison. The Times earned national recognition for its coverage of Tweed. In 1891, Jones died, creating a management imbroglio in which his children had insufficient business acumen to inherit the company and his will prevented an acquisition of the Times. Editor-in-chief Charles Ransom Miller, editorial editor Edward Cary, and correspondent George F. Spinney established a company to manage The New-York Times, but faced financial difficulties during the Panic of 1893. In August 1896, Chattanooga Times publisher Adolph Ochs acquired The New-York Times, implementing significant alterations to the newspaper's structure. Ochs established the Times as a merchant's newspaper and removed the hyphen from the newspaper's name. In 1905, The New York Times opened Times Tower, marking expansion. The Times experienced a political realignment in the 1910s amid several disagreements within the Republican Party. The New York Times reported on the sinking of the Titanic, as other newspapers were cautious about bulletins circulated by the Associated Press. Through managing editor Carr Van Anda, the Times paid considerable attention to advances in science, reporting on Albert Einstein's then-obscure theory of general relativity and becoming involved in the discovery of the tomb of Tutankhamun. In April 1935, Ochs died, leaving his son-in-law Arthur Hays Sulzberger as publisher. The Great Depression forced Sulzberger to reduce The New York Times's operations, and developments in the New York newspaper landscape resulted in the formation of larger newspapers, such as the New York Herald Tribune and the New York World-Telegram. In contrast to Ochs, Sulzberger encouraged wirephotography. The New York Times extensively covered World War II through large headlines, reporting on exclusive stories such as the Yugoslav coup d'état. Amid the war, Sulzberger began expanding the Times's operations further, acquiring WQXR-FM in 1944—the first non-Times investment since the Jones era—and established a fashion show in Times Hall. Despite reductions as a result of conscription, The New York Times retained the largest journalism staff of any newspaper. The Times's print edition became available internationally during the war through the Army & Air Force Exchange Service; The New York Times Overseas Weekly later became available in Japan through The Asahi Shimbun and in Germany through the Frankfurter Zeitung. The international edition would develop into a separate newspaper. Journalist William L. Laurence publicized the atomic bomb race between the United States and Germany, resulting in the Federal Bureau of Investigation seizing copies of the Times. The United States government recruited Laurence to document the Manhattan Project in April 1945. Laurence became the only witness of the Manhattan Project, a detail realized by employees of The New York Times following the atomic bombing of Hiroshima. Following World War II, The New York Times continued to expand. 
The Times was subject to investigations from the Senate Internal Security Subcommittee, a McCarthyist subcommittee that investigated purported communism from within press institutions. Arthur Hays Sulzberger's decision to dismiss a copyreader who had pleaded the Fifth Amendment drew ire from within the Times and from external organizations. In April 1961, Sulzberger resigned, appointing his son-in-law, The New York Times Company president Orvil Dryfoos. Under Dryfoos, The New York Times established a newspaper based in Los Angeles. In 1962, the implementation of automated printing presses in response to increasing costs mounted fears over technological unemployment. The New York Typographical Union staged a strike in December, altering the media consumption of New Yorkers. The strike left New York with three remaining newspapers—the Times, the Daily News, and the New York Post—by its conclusion in March 1963. In May, Dryfoos died of a heart ailment. Following weeks of ambiguity, Arthur Ochs Sulzberger became The New York Times's publisher. Technological advancements leveraged by newspapers such as the Los Angeles Times and improvements in coverage from The Washington Post and The Wall Street Journal necessitated adaptations to nascent computing. The New York Times published "Heed Their Rising Voices" in 1960, a full-page advertisement purchased by supporters of Martin Luther King Jr. criticizing law enforcement in Montgomery, Alabama for their response to the civil rights movement. Montgomery Public Safety commissioner L. B. Sullivan sued the Times for defamation. In New York Times Co. v. Sullivan (1964), the U.S. Supreme Court ruled that the verdict in Alabama county court and the Supreme Court of Alabama violated the First Amendment. The decision is considered to be landmark. After financial losses, The New York Times ended its international edition, acquiring a stake in the Paris Herald Tribune, forming the International Herald Tribune. The Times initially published the Pentagon Papers, facing opposition from then-president Richard Nixon. The Supreme Court ruled in The New York Times's favor in New York Times Co. v. United States (1971), allowing the Times and The Washington Post to publish the papers. The New York Times remained cautious in its initial coverage of the Watergate scandal. As Congress began investigating the scandal, the Times furthered its coverage, publishing details on the Huston Plan, alleged wiretapping of reporters and officials, and testimony from James W. McCord Jr. that the Committee for the Re-Election of the President paid the conspirators off. The exodus of readers to suburban New York newspapers, such as Newsday and Gannett papers, adversely affected The New York Times's circulation. Contemporary newspapers balked at additional sections; Time devoted a cover for its criticism and New York wrote that the Times was engaging in "middle-class self-absorption". The New York Times, the Daily News, and the New York Post were the subject of a strike in 1978, allowing emerging newspapers to leverage halted coverage. The Times deliberately avoided coverage of the AIDS epidemic, running its first front-page article in May 1983. Max Frankel's editorial coverage of the epidemic, with mentions of anal intercourse, contrasted with then-executive editor A. M. Rosenthal's puritan approach, intentionally avoiding descriptions of the luridity of gay venues. 
Following years of waning interest in The New York Times, Sulzberger resigned in January 1992, appointing his son, Arthur Ochs Sulzberger Jr., as publisher. The Internet represented a generational shift within the Times; Sulzberger, who negotiated The New York Times Company's acquisition of The Boston Globe in 1993, derided the Internet, while his son expressed antithetical views. @times appeared on America Online's website in May 1994 as an extension of The New York Times, featuring news articles, film reviews, sports news, and business articles. Despite opposition, several employees of the Times had begun to access the Internet. The online success of publications that traditionally co-existed with the Times—such as America Online, Yahoo, and CNN—and the expansion of websites such as Monster.com and Craigslist that threatened The New York Times's classified advertisement model increased efforts to develop a website. nytimes.com debuted on January 19, 1996, and was formally announced three days later. The Times published domestic terrorist Ted Kaczynski's essay Industrial Society and Its Future in 1995, contributing to his arrest after his brother David recognized the essay's prose style. Following the establishment of nytimes.com, The New York Times retained its journalistic hesitancy under executive editor Joseph Lelyveld, refusing to publish an article reporting on the Clinton–Lewinsky scandal from Drudge Report. nytimes.com editors conflicted with print editors on several occasions, including wrongly naming security guard Richard Jewell as the suspect in the Centennial Olympic Park bombing and covering the death of Diana, Princess of Wales in greater detail than the print edition. The New York Times Electronic Media Company was adversely affected by the dot-com crash. The Times extensively covered the September 11 attacks. The following day's print issue contained sixty-six articles, the work of over three hundred dispatched reporters. Journalist Judith Miller was the recipient of a package containing a white powder during the 2001 anthrax attacks, furthering anxiety within The New York Times. In September 2002, Miller and military correspondent Michael R. Gordon wrote an article for the Times claiming that Iraq had purchased aluminum tubes. The article was cited by then-president George W. Bush to claim that Iraq was constructing weapons of mass destruction; the theoretical use of aluminum tubes to produce nuclear material was speculation. In March 2003, the United States invaded Iraq, beginning the Iraq War. The New York Times attracted controversy after thirty-six articles from journalist Jayson Blair were discovered to be plagiarized. Criticism over then-executive editor Howell Raines and then-managing editor Gerald M. Boyd mounted following the scandal, culminating in a town hall in which a deputy editor criticized Raines for failing to question Blair's sources in an article he wrote on the D.C. sniper attacks. In June 2003, Raines and Boyd resigned. Arthur Ochs Sulzberger Jr. appointed Bill Keller as executive editor. Miller continued to report on the Iraq War as a journalistic embed covering the country's weapons of mass destruction program. Keller and then-Washington bureau chief Jill Abramson unsuccessfully attempted to quell criticism. Conservative media criticized the Times over its coverage of missing explosives from the Al Qa'qaa weapons facility.
An article in December 2005 disclosing warrantless surveillance by the National Security Agency contributed to further criticism from the George W. Bush administration and the Senate's refusal to renew the Patriot Act. In the Plame affair, a Central Intelligence Agency inquiry found that Miller had become aware of Valerie Plame's identity through then-vice president Dick Cheney's chief of staff Scooter Libby, resulting in Miller's resignation. During the Great Recession, The New York Times suffered significant fiscal difficulties as a consequence of the subprime mortgage crisis and a decline in classified advertising. Exacerbated by Rupert Murdoch's revitalization of The Wall Street Journal through his acquisition of Dow Jones & Company, The New York Times Company began enacting measures to reduce the newsroom budget. The company was forced to borrow $250 million (equivalent to $373.84 million in 2025) from Mexican billionaire Carlos Slim and fired over one hundred employees by 2010. nytimes.com's coverage of the Eliot Spitzer prostitution scandal, resulting in the resignation of then-New York governor Eliot Spitzer, furthered the legitimacy of the website as a journalistic medium. The Times's economic downturn renewed discussions of an online paywall; The New York Times implemented a paywall in March 2011. Abramson succeeded Keller, bringing her characteristic investigations into corporate and government malfeasance to the Times's coverage. Following conflicts with newly appointed chief executive Mark Thompson's ambitions, Abramson was dismissed by Sulzberger Jr., who named Dean Baquet as her replacement. Leading up to the 2016 presidential election, The New York Times elevated the Hillary Clinton email controversy into a national issue. Donald Trump's upset victory contributed to an increase in subscriptions to the Times. The New York Times experienced unprecedented indignation from Trump, who referred to publications such as the Times as "enemies of the people" at the Conservative Political Action Conference and tweeted his disdain for the newspaper and CNN. In October 2017, The New York Times published an article by journalists Jodi Kantor and Megan Twohey alleging that dozens of women had accused film producer and The Weinstein Company co-chairman Harvey Weinstein of sexual misconduct. The investigation resulted in Weinstein's resignation and conviction, precipitated the Weinstein effect, and served as a catalyst for the #MeToo movement. The New York Times Company vacated the public editor position and eliminated the copy desk in November. Sulzberger Jr. announced his resignation in December 2017, appointing his son, A. G. Sulzberger, as publisher. Sulzberger's tenure was marked by his relationship with Trump, which was in equal measure diplomatic and hostile. In September 2018, The New York Times published "I Am Part of the Resistance Inside the Trump Administration", an anonymous essay by a self-described Trump administration official later revealed to be Department of Homeland Security chief of staff Miles Taylor. The animosity—which extended to nearly three hundred instances of Trump disparaging the Times by May 2019—culminated in Trump ordering federal agencies to cancel their subscriptions to The New York Times and The Washington Post in October 2019. Trump's tax returns have been the subject of three separate investigations.[c] During the COVID-19 pandemic, the Times began implementing data services and graphs. On May 23, 2020, The New York Times's front page solely featured U.S.
Deaths Near 100,000, An Incalculable Loss, a list naming a subset of the 100,000 people in the United States who died of COVID-19; it was the first time that the Times's front page lacked images since they were introduced. Since 2020, The New York Times has focused on broader diversification, developing online games and producing television series. The New York Times Company acquired The Athletic in January 2022.

Organization

Since 1896, The New York Times has been published by the Ochs-Sulzberger family, having previously been published by Henry Jarvis Raymond until 1869 and by George Jones until 1896. Adolph Ochs published the Times until his death in 1935, when he was succeeded by his son-in-law, Arthur Hays Sulzberger. Sulzberger was publisher until 1961 and was succeeded by Orvil Dryfoos, his son-in-law, who served in the position until his death in 1963. Arthur Ochs Sulzberger succeeded Dryfoos until his resignation in 1992. His son, Arthur Ochs Sulzberger Jr., served as publisher until 2018. The New York Times's current publisher is A. G. Sulzberger, Sulzberger Jr.'s son. As of 2023, the Times's executive editor is Joseph Kahn and the paper's managing editors are Marc Lacey and Carolyn Ryan, having been appointed in June 2022. The New York Times's deputy managing editors are Sam Dolnick, Monica Drake, and Steve Duenes, and the paper's assistant managing editors are Matthew Ericson, Jonathan Galinsky, Hannah Poferl, Sam Sifton, Karron Skog, and Michael Slackman. The New York Times is owned by The New York Times Company, a publicly traded company. The New York Times Company, in addition to the Times, owns Wirecutter, The Athletic, The New York Times Cooking, and The New York Times Games, and acquired Serial Productions and Audm. The New York Times Company holds undisclosed minority investments in multiple other businesses, and formerly owned The Boston Globe and several radio and television stations. The New York Times Company is majority-owned by the Ochs-Sulzberger family through elevated shares in the company's dual-class stock structure held largely in a trust, in effect since the 1950s; as of 2022, the family holds ninety-five percent of The New York Times Company's Class B shares, allowing it to elect seventy percent of the company's board of directors. Class A shareholders have restricted voting rights. As of 2023, The New York Times Company's chief executive is Meredith Kopit Levien, the company's former chief operating officer who was appointed in September 2020. As of March 2023, The New York Times Company employs 5,800 individuals, including 1,700 journalists according to deputy managing editor Sam Dolnick. Journalists for The New York Times may not run for public office, provide financial support to political candidates or causes, endorse candidates, or demonstrate public support for causes or movements. Journalists are subject to the guidelines established in "Ethical Journalism" and "Guidelines on Integrity". According to the former, Times journalists must abstain from using sources with a personal relationship to them and must not accept reimbursements or inducements from individuals who may be written about in The New York Times, with exceptions for gifts of nominal value. The latter requires attribution and exact quotations, though exceptions are made for linguistic anomalies. Staff writers are expected to ensure the veracity of all written claims, but may delegate researching obscure facts to the research desk.
In March 2021, the Times established a committee to avoid journalistic conflicts of interest with work written for The New York Times, following columnist David Brooks's resignation from the Aspen Institute for his undisclosed work on the initiative Weave. The New York Times editorial board was established in 1896 by Adolph Ochs. With the opinion department, the editorial board is independent of the newsroom. Then-editor-in-chief Charles Ransom Miller served as opinion editor from 1883 until his death in 1922. Rollo Ogden succeeded Miller until his death in 1937. From 1937 to 1938, John Huston Finley served as opinion editor; in a prearranged plan, Charles Merz succeeded Finley. Merz served in the position until his retirement in 1961. John Bertram Oakes served as opinion editor from 1961 to 1976, when then-publisher Arthur Ochs Sulzberger appointed Max Frankel. Frankel served in the position until 1986, when he was appointed as executive editor. Jack Rosenthal was the opinion editor from 1986 to 1993. Howell Raines succeeded Rosenthal until 2001, when he was made executive editor. Gail Collins succeeded Raines until her resignation in 2006. From 2007 to 2016, Andrew Rosenthal was the opinion editor. James Bennet succeeded Rosenthal until his resignation in 2020. As of July 2024[update], the editorial board comprises thirteen opinion writers. The New York Times's opinion editor is Kathleen Kingsbury and the deputy opinion editor is Patrick Healy. The New York Times's editorial board was initially opposed to liberal beliefs, opposing women's suffrage in 1900 and 1914. The editorial board began to espouse progressive beliefs during Oakes's tenure, conflicting with the Ochs-Sulzberger family, of which Oakes was a member as Adolph Ochs's nephew; in 1976, Oakes publicly disagreed with Sulzberger's endorsement of Daniel Patrick Moynihan over Bella Abzug in the 1976 Senate Democratic primaries in a letter sent from Martha's Vineyard. Under Rosenthal, the editorial board took positions supporting assault weapons legislation and the legalization of marijuana, but publicly criticized the Obama administration over its portrayal of terrorism. In presidential elections, The New York Times has endorsed a total of twelve Republican candidates and thirty-two Democratic candidates, and has endorsed the Democrat in every election since 1960.[j] With the exception of Wendell Willkie, Republicans endorsed by the Times have won the presidency. In 2016, the editorial board issued an anti-endorsement against Donald Trump for the first time in its history. In February 2020, the editorial board reduced its presence from several editorials each day to occasional editorials for events deemed particularly significant. Since August 2024, the board no longer endorses candidates in local or congressional races in New York. Since 1940, editorial, media, and technology workers of The New York Times have been represented by the New York Times Guild. The Times Guild, along with the Times Tech Guild, are represented by the NewsGuild-CWA. In 1940, Arthur Hays Sulzberger was called upon by the National Labor Relations Board amid accusations that he had discouraged Guild membership in the Times. Over the next few years, the Guild would ratify several contracts, expanding to editorial and news staff in 1942 and maintenance workers in 1943. 
The New York Times Guild has walked out several times in its history, including for six and a half hours in 1981 and in 2017, when copy editors and reporters walked out at lunchtime in response to the elimination of the copy desk. On December 7, 2022, the union held a one-day strike, the first interruption to The New York Times since 1978. The New York Times Guild reached an agreement in May 2023 to increase minimum salaries for employees and a retroactive bonus. The Times Tech Guild is the largest technology union with collective bargaining rights in the United States. The guild held a second strike beginning on November 4, 2024, threatening the Times's coverage of the 2024 United States presidential election.

Content

As of August 2025, The New York Times has 11.8 million subscribers, with 11.3 million online-only subscribers and 580,000 print subscribers. The New York Times Company intends to have 15 million subscribers by 2027. The Times's shift towards subscription-based revenue with the debut of an online paywall in 2011 contributed to subscription revenue exceeding advertising revenue the following year, furthered by the 2016 presidential election and Donald Trump. In 2022, Vox wrote that The New York Times's subscribers skew "older, richer, whiter, and more liberal"; to reflect the general population of the United States, the Times has attempted to alter its audience by acquiring The Athletic, investing in verticals such as The New York Times Games, and beginning a marketing campaign showing diverse subscribers to the Times. The New York Times Company chief executive Meredith Kopit Levien stated that the average age of subscribers has remained constant. In October 2001, The New York Times began publishing DealBook, a financial newsletter edited by Andrew Ross Sorkin. The Times had intended to publish the newsletter in September, but delayed its debut following the September 11 attacks. A website for DealBook was established in March 2006. The New York Times began shifting towards DealBook as part of the newspaper's financial coverage in November 2010 with a renewed website and a presence in the Times's print edition. In 2011, the Times began hosting the DealBook Summit, an annual conference hosted by Sorkin. During the COVID-19 pandemic, The New York Times hosted the DealBook Online Summit in 2020 and 2021. The 2022 DealBook Summit featured—among other speakers—former vice president Mike Pence and Israeli prime minister Benjamin Netanyahu, culminating in an interview with former FTX chief executive Sam Bankman-Fried; FTX had filed for bankruptcy several weeks prior. The 2023 DealBook Summit's speakers included vice president Kamala Harris, Israeli president Isaac Herzog, and businessman Elon Musk. In June 2010, The New York Times licensed the political blog FiveThirtyEight in a three-year agreement. The blog, written by Nate Silver, had garnered attention during the 2008 presidential election for predicting the elections in forty-nine of fifty states. FiveThirtyEight appeared on nytimes.com in August. According to Silver, several offers were made for the blog; Silver wrote that a merger of unequals must allow for editorial sovereignty and resources from the acquirer, comparing himself to Groucho Marx. According to The New Republic, FiveThirtyEight drew as much as a fifth of the traffic to nytimes.com during the 2012 presidential election. In July 2013, FiveThirtyEight was sold to ESPN.
In an article following Silver's exit, public editor Margaret Sullivan wrote that he was disruptive to the Times's culture for his perspective on probability-based predictions and scorn for polling—having stated that punditry is "fundamentally useless", comparing him to Billy Beane, who implemented sabermetrics in baseball. According to Sullivan, his work was criticized by several notable political journalists. The New Republic obtained a memo in November 2013 revealing then-Washington bureau chief David Leonhardt's ambitions to establish a data-driven newsletter with presidential historian Michael Beschloss, graphic designer Amanda Cox, economist Justin Wolfers, and The New Republic journalist Nate Cohn. By March, Leonhardt had amassed fifteen employees from within The New York Times; the newsletter's staff included individuals who had created the Times's dialect quiz, fourth down analyzer, and a calculator for determining buying or renting a home. The Upshot debuted in April 2014. Fast Company reviewed an article about Illinois Secure Choice—a state-funded retirement saving system—as "neither a terse news item, nor a formal financial advice column, nor a politically charged response to economic policy", citing its informal and neutral tone. The Upshot developed "the needle" for the 2016 presidential election and 2020 presidential elections, a thermometer dial displaying the probability of a candidate winning. In January 2016, Cox was named editor of The Upshot. Kevin Quealy was named editor in June 2022. The New York Times has said it is perceived as a liberal newspaper. An analysis by Pew Research Center in October 2014 placed the Times readership as ideologically liberal based on a scale of 10 political values questions. According to an internal readership poll conducted by The New York Times in 2019, eighty-four percent of readers identified as liberal. The New York Times has struggled internally with how to balance its coverage, dismissing criticism from the left for "sanewashing" right-wing viewpoints in its coverage of Donald Trump. In covering Israel's war on the Gaza Strip that began in 2023, The New York Times instructed its reporters to restrict use of the terms 'Palestine', 'genocide', and 'refugee camps' to specific usages, with data analysis showing a pattern of articles emphasizing Israeli civilians killed by Palestinians over a much larger number of Palestinian civilians killed by Israelis. The group Writers Against the War on Gaza wrote in the blog Mondoweiss that this has contrasted with The New York Times coverage of Russia's invasion of Ukraine, in which Russia is considered a threat to U.S. foreign policy interests, while Israel is considered an ally. In February 1942, The New York Times crossword debuted in The New York Times Magazine; according to Richard Shepard, the attack on Pearl Harbor in December 1941 convinced then-publisher Arthur Hays Sulzberger of the necessity of a crossword. The New York Times has published recipes since the 1850s and has had a separate food section since the 1940s. In 1961, restaurant critic Craig Claiborne published The New York Times Cookbook, an unauthorized cookbook that drew from the Times's recipes. Since 2010, former food editor Amanda Hesser has published The Essential New York Times Cookbook, a compendium of recipes from The New York Times. The Innovation Report in 2014 revealed that the Times had attempted to establish a cooking website since 1998, but faced difficulties with the absence of a defined data structure. 
In September 2014, The New York Times introduced NYT Cooking, an application and website. Edited by food editor Sam Sifton, the Times's cooking website features 21,000 recipes as of 2022. NYT Cooking also features videos, an effort for which Sifton hired two former Tasty employees from BuzzFeed. In August 2023, NYT Cooking added personalized recommendations through the cosine similarity of text embeddings of recipe titles (a small sketch of this kind of similarity calculation appears below). The website also features no-recipe recipes, a concept proposed by Sifton. In May 2016, The New York Times Company announced a partnership with startup Chef'd to form a meal delivery service that would deliver ingredients from The New York Times Cooking recipes to subscribers; Chef'd shut down in July 2018 after failing to accrue capital and secure financing. The Hollywood Reporter reported in September 2022 that the Times would expand its delivery options to US$95 cooking kits curated by chefs such as Nina Compton, Chintan Pandya, and Naoko Takei Moore. That month, the staff of NYT Cooking went on tour with Compton, Pandya, and Moore in Los Angeles, New Orleans, and New York City, culminating in a food festival. In addition, The New York Times offered its own wine club originally operated by the Global Wine Company. The New York Times Wine Club was established in August 2009, during a dramatic decrease in advertising revenue. By 2021, the wine club was managed by Lot18, a company that provides proprietary labels. Lot18 managed the Williams Sonoma Wine Club and its own wine club Tasting Room. The New York Times archives its articles in a basement annex beneath its building known as "the morgue", a venture started by managing editor Carr Van Anda in 1907. The morgue comprises news clippings, a pictures library, and the Times's book and periodicals library. As of 2014, it is the largest library of any media company, dating back to 1851. In November 2018, The New York Times partnered with Google to digitize the Archival Library. Additionally, The New York Times has maintained a virtual microfilm reader known as TimesMachine since 2014. The service launched with archives from 1851 to 1980; in 2016, TimesMachine expanded to include archives from 1981 to 2002. The Times built a pipeline to take in TIFF images, article metadata in XML and an INI file of Cartesian geometry describing the boundaries of the page, and convert them into PNG image tiles and JSON containing the information from the XML and INI files. The image tiles are generated using GDAL and displayed using Leaflet, using data from a content delivery network. The Times ran optical character recognition on the articles using Tesseract, then applied shingling and fuzzy string matching to the result. The New York Times uses a proprietary content management system known as Scoop for its online content and the Microsoft Word-based content management system CCI for its print content. Scoop was developed in 2008 to serve as a secondary content management system for editors working in CCI to publish their content on the Times's website; as part of The New York Times's online endeavors, editors now write their content in Scoop and send their work to CCI for print publication. Since its introduction, Scoop has superseded several processes within the Times, including print edition planning and collaboration, and features tools such as multimedia integration, notifications, content tagging, and drafts.
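The cosine-similarity recommendation mentioned above can be illustrated in a few lines of Python; the recipe titles and embedding vectors below are invented stand-ins (a production system would use learned text embeddings), so this shows only the shape of the calculation, not the Times's actual implementation:

    import math

    def cosine_similarity(a, b):
        """Cosine of the angle between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Invented stand-in embeddings for a few recipe titles.
    embeddings = {
        "Spicy Chicken Ramen":  [0.9, 0.1, 0.3],
        "Miso Noodle Soup":     [0.8, 0.2, 0.4],
        "Chocolate Layer Cake": [0.1, 0.9, 0.2],
    }

    def recommend(title, k=2):
        """Return the k other titles whose embeddings point in the most similar direction."""
        query = embeddings[title]
        others = ((t, cosine_similarity(query, v)) for t, v in embeddings.items() if t != title)
        return sorted(others, key=lambda pair: pair[1], reverse=True)[:k]

    print(recommend("Spicy Chicken Ramen", k=1))   # [('Miso Noodle Soup', 0.98...)]

Because cosine similarity compares direction rather than magnitude, two titles whose embeddings "point the same way" are treated as related even if one vector is longer than the other.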
The New York Times uses private articles for high-profile opinion pieces, such as those written by Russian president Vladimir Putin and actress Angelina Jolie, and for high-level investigations. In January 2012, the Times released Integrated Content Editor (ICE), a revision tracking tool for WordPress and TinyMCE. ICE is integrated within the Times's workflow by providing a unified text editor for print and online editors, reducing the divide between print and online operations. By 2017, The New York Times had begun developing a new authoring tool for its content management system, known as Oak, in an attempt to further the Times's visual efforts in articles and reduce the discrepancy between print and online articles. The system reduces the input required from editors and supports additional visual media in an editor that resembles the appearance of the published article. Oak is based on ProseMirror, a JavaScript rich-text editor toolkit, and retains the revision tracking and commenting functionalities of The New York Times's previous systems. Additionally, Oak supports predefined article headers. In 2019, Oak was updated to support collaborative editing, using Firebase to update editors' cursor status. Several Google Cloud Functions and Google Cloud Tasks allow articles to be previewed as they will appear in print, and the Times's primary MySQL database is regularly updated to keep editors apprised of an article's status. Style and design Since 1895, The New York Times has maintained a manual of style in several forms. The New York Times Manual of Style and Usage was published on the Times's intranet in 1999. The New York Times uses honorifics when referring to individuals. With the AP Stylebook's removal of honorifics in 2000 and The Wall Street Journal's omission of courtesy titles in May 2023, the Times is the only national newspaper that continues to use honorifics. According to former copy editor Merrill Perlman, The New York Times continues to use honorifics as a "sign of civility". The Times's use of courtesy titles led to an apocryphal rumor that the paper had referred to singer Meat Loaf as "Mr. Loaf". Several exceptions have been made; the former sports section and The New York Times Book Review do not use honorifics. A leaked memo following the killing of Osama bin Laden in May 2011 revealed that editors were given a last-minute instruction to omit the honorific from Osama bin Laden's name, consistent with the treatment of deceased figures of historic significance, such as Adolf Hitler, Napoleon, and Vladimir Lenin. The New York Times uses academic and military titles for individuals prominently serving in that position. In 1986, the Times began to use Ms., and introduced the gender-neutral title Mx. in 2015. The New York Times uses middle initials when a subject has expressed a preference, as with Donald J. Trump. The New York Times maintains a strict but not absolute obscenity policy, which extends to phrases. In a review of the Canadian hardcore punk band Fucked Up, music critic Kelefa Sanneh wrote that the band's name—entirely rendered in asterisks—would not be printed in the Times "unless an American president, or someone similar, says it by mistake"; The New York Times did not repeat then-vice president Dick Cheney's use of "fuck" against then-senator Patrick Leahy in 2004 or then-vice president Joe Biden's remarks that the passage of the Affordable Care Act in 2010 was a "big fucking deal". The Times's profanity policy has been tested by former president Donald Trump. 
The New York Times published Trump's Access Hollywood tape in October 2016, containing the words "fuck", "pussy", "bitch", and "tits", the first time the publication had published an expletive on its front page, and repeated an explicit phrase for fellatio stated by then-White House communications director Anthony Scaramucci in July 2017. The New York Times omitted Trump's use of the phrase "shithole countries" from its headline in favor of "vulgar language" in January 2018. The Times banned certain words, such as "bitch", "whore", and "sluts", from Wordle in 2022. Journalists for The New York Times do not write their own headlines; headlines are instead written by copy editors who specialize in them. The Times's guidelines insist that headline editors get to the main point of an article while avoiding giving away endings, if present. Other guidelines include using slang "sparingly", avoiding tabloid headlines, not ending a line on a preposition, article, or adjective, and, chiefly, not punning. The New York Times Manual of Style and Usage states that wordplay, such as "Rubber Industry Bounces Back", is to be tested on a colleague as a canary is tested in a coal mine; "when no song bursts forth, start rewriting". The New York Times has amended headlines due to controversy. In 2019, following two back-to-back mass shootings in El Paso and Dayton, the Times used the headline "Trump Urges Unity vs. Racism" to describe then-president Donald Trump's words after the shootings. After criticism from FiveThirtyEight founder Nate Silver, the headline was changed to "Assailing Hate But Not Guns". Online, The New York Times's headlines do not face the same length restrictions as headlines that appear in print; print headlines must fit within a column, often six words. Additionally, headlines must "break" properly, containing a complete thought on each line without splitting up prepositions and adverbs. Writers may edit a headline to fit an article more aptly if further developments occur. The Times uses A/B testing for articles on the front page, placing two headlines against each other; at the end of the test, the headline that receives more traffic is chosen. The alteration of a headline regarding intercepted Russian data used in the Mueller special counsel investigation was noted by Trump in a March 2017 interview with Time, in which he claimed that the headline used the word "wiretapped" in the print version of the paper on January 20, while the digital article on January 19 omitted the word. The headline was intentionally changed in the print version to use "wiretapped" in order to fit within the print guidelines. The nameplate of The New York Times has been unaltered since 1967. In creating the initial nameplate, Henry Jarvis Raymond took as his model the British newspaper The Times, which used a Blackletter style called Textura (popularized following the fall of the Western Roman Empire and regional variations of Alcuin's script), as well as a period. When the paper's name changed to The New-York Times on September 14, 1857, the nameplate followed suit. Under George Jones, the terminals of the "N", "r", and "s" were intentionally exaggerated into swashes. The nameplate in the January 15, 1894, issue trimmed the terminals once more, smoothed the edges, and turned the stem supporting the "T" into an ornament. The hyphen was dropped on December 1, 1896, after Adolph Ochs purchased the paper. The descender of the "h" was shortened on December 30, 1914. 
The largest change to the nameplate was introduced on February 21, 1967, when type designer Ed Benguiat redesigned the logo, most prominently turning the arrow ornament into a diamond. Notoriously, the new logo dropped the period that had followed the word Times up until that point; one reader compared the omission of the period to "performing plastic surgery on Helen of Troy." Picture editor John Radosta worked with a New York University professor to determine that dropping the period saved the paper US$41.28 (equivalent to $398.59 in 2025). Print edition As of December 2023, The New York Times has printed sixty thousand issues, a statistic represented in the paper's masthead to the right of the volume number, which records the Times's years in publication in Roman numerals. The volume and issue numbers are separated by four dots representing the edition number of that issue; on the day of the 2000 presidential election, the Times was revised four separate times, necessitating the use of an em dash in place of an ellipsis. The em dash issue was printed hundreds of times over before being replaced by the one-dot issue. Despite efforts by newsroom employees to recycle copies sent to The New York Times's office, several copies were kept, including one put on display at the Museum at The Times. From February 7, 1898, to December 31, 1999, the Times's issue number was incorrect by five hundred issues, an error suspected by The Atlantic to be the result of a careless front-page type editor. The misreporting was noticed by news editor Aaron Donovan, who spotted the discrepancy while calculating the number of issues in a spreadsheet. The New York Times celebrated fifty thousand issues on March 14, 1995, an observance that should have occurred on July 26, 1996. The New York Times has reduced the physical size of its print edition while retaining its broadsheet format. The New-York Daily Times debuted at 18 inches (460 mm) across. By the 1950s, the Times was being printed at 16 inches (410 mm) across. In 1953, an increase in paper costs of US$10 (equivalent to $120.34 in 2025) a ton raised newsprint costs to US$21.7 million (equivalent to $326,110,074.63 in 2025). On December 28, 1953, the pages were reduced to 15.5 inches (390 mm). On February 14, 1955, a further reduction to 15 inches (380 mm) occurred, followed by 14.5 and 13.5 inches (370 and 340 mm). On August 6, 2007, the largest cut occurred when the pages were reduced to 12 inches (300 mm),[k] a decision that other broadsheets had previously considered. Then-executive editor Bill Keller stated that a narrower paper would be more beneficial to the reader but acknowledged a net loss in article space of five percent. In 1985, The New York Times Company established a minority stake in a US$21.7 million newsprint plant in Clermont, Quebec, through Donohue Malbaie. The company sold its equity interest in Donohue Malbaie in 2017. The New York Times often uses large, bolded headlines for major events. For the print version of the Times, these headlines are written by one copy editor, reviewed by two other copy editors, approved by the masthead editors, and polished by other print editors. The process is completed before 8 p.m., but it may be repeated if further developments occur, as happened during the 2020 presidential election. On the day Joe Biden was declared the winner, The New York Times utilized a "hammer headline" reading "Biden Beats Trump", in all caps and bolded. 
A dozen journalists discussed several potential headlines, such as "It's Biden" or "Biden's Moment", and prepared for a Donald Trump victory, in which case they would have used "Trump Prevails". During Trump's first impeachment, the Times drafted the hammer headline "Trump Impeached". The New York Times altered the kerning between the E and the A, as not doing so would leave a noticeable gap due to the stem of the A sloping away from the E. The Times reused the tight kerning for "Biden Beats Trump" and Trump's second impeachment, which simply read "Impeached". In cases where two major events occur on the same day or immediately after each other, The New York Times has used a "paddle wheel" headline, where both headlines are used but split by a line. The term dates back to August 8, 1959, when it was revealed that the United States was monitoring Soviet missile firings and when Explorer 6—shaped like a paddle wheel—launched. Since then, the paddle wheel has been used several times, including on January 21, 1981, when Ronald Reagan was sworn in minutes before Iran released fifty-two American hostages, ending the Iran hostage crisis. At the time, most newspapers favored the end of the hostage crisis, but the Times placed the inauguration above the crisis. Other occasions in which the paddle wheel has been used include July 26, 2000, when the 2000 Camp David Summit ended without an agreement and when Bush announced that Dick Cheney would be his running mate, and June 24, 2016, when the United Kingdom European Union membership referendum passed, beginning Brexit, and when the Supreme Court deadlocked in United States v. Texas. The New York Times has run editorials from its editorial board on the front page twice. On June 13, 1920, the Times ran an editorial opposing Warren G. Harding, who had been nominated at that year's Republican National Convention. Amid growing acceptance of front-page editorials at publications such as the Detroit Free Press, The Patriot-News, The Arizona Republic, and The Indianapolis Star, The New York Times ran an editorial on its front page on December 5, 2015, following a terrorist attack in San Bernardino, California, in which fourteen people were killed. The editorial advocated for the prohibition of "slightly modified combat rifles" used in the San Bernardino shooting and "certain kinds of ammunition". Conservative figures, including Texas senator Ted Cruz, The Weekly Standard editor Bill Kristol, Fox & Friends co-anchor Steve Doocy, and then-New Jersey governor Chris Christie, criticized the Times. Talk radio host Erick Erickson fired several rounds into a copy of the paper and posted a picture online. Since 1997, The New York Times's primary distribution center has been located in College Point, Queens. The facility measures 300,000 sq ft (28,000 m2) and employed 170 people as of 2017. The College Point distribution center prints 300,000 to 800,000 newspapers daily. On most occasions, presses start before 11 p.m. and finish before 3 a.m. A robotic crane grabs a roll of newsprint, and several rollers ensure ink can be printed on the paper. The final newspapers are wrapped in plastic and shipped out. As of 2018, the College Point facility accounted for 41 percent of production. Other copies are printed at the facilities of 26 other publications, such as The Atlanta Journal-Constitution, The Dallas Morning News, The Santa Fe New Mexican, and the Courier Journal. 
With the decline of newspapers, particularly regional publications, the Times's papers must travel further; for example, newspapers for Hawaii are flown from San Francisco on United Airlines, and Sunday papers are flown from Los Angeles on Hawaiian Airlines. Computer glitches, mechanical issues, and weather phenomena affect circulation but do not stop the paper from reaching customers. The College Point facility prints over two dozen other papers, including The Wall Street Journal and USA Today. The New York Times has halted its printing process several times to account for major developments. The first printing stoppage occurred on March 31, 1968, when then-president Lyndon B. Johnson announced that he would not seek re-election. Other press stoppages include May 19, 1994, for the death of former first lady Jacqueline Kennedy Onassis, and July 17, 1996, for the crash of Trans World Airlines Flight 800. The 2000 presidential election necessitated two press stoppages. Al Gore appeared to concede on November 8, forcing then-executive editor Joseph Lelyveld to stop the Times's presses to print a new headline, "Bush Appears to Defeat Gore", with a story stating that George W. Bush had been elected president. However, Gore held off his concession speech amid doubts over the result in Florida, and Lelyveld reran the presses with the headline "Bush and Gore Vie for an Edge". Since 2000, three printing stoppages have occurred: for the death of William Rehnquist on September 3, 2005; for the killing of Osama bin Laden on May 1, 2011; and for the passage of the Marriage Equality Act in the New York State Assembly and its subsequent signing by then-governor Andrew Cuomo on June 24, 2011. Online platforms The New York Times website is hosted at nytimes.com. It has undergone several major redesigns and infrastructure developments since its debut. In April 2006, The New York Times redesigned its website with an emphasis on multimedia. In preparation for Super Tuesday in February 2008, the Times developed a live election system using the Associated Press's File Transfer Protocol (FTP) service and a Ruby on Rails application; nytimes.com experienced its largest traffic on Super Tuesday and the day after. The NYTimes application debuted with the introduction of the App Store on July 10, 2008. Engadget's Scott McNulty wrote critically of the app, negatively comparing it to The New York Times's mobile website. An iPad version with select articles was released on April 3, 2010, with the release of the first-generation iPad. In October, The New York Times expanded NYT Editors' Choice to include the paper's full articles. NYT for iPad was free until 2011. The Times applications on iPhone and iPad began offering in-app subscriptions in July 2011. The Times released a web application for iPad—featuring a format summarizing trending headlines on Twitter—and a Windows 8 application in October 2012. Plans to ensure profitability through an online magazine and a "Need to Know" subscription were reported by Adweek in July 2013. In March 2014, The New York Times announced three applications—NYT Now, an application that offers pertinent news in a blog format, and two unnamed applications, later known as NYT Opinion and NYT Cooking—to diversify its product verticals. The New York Times manages several podcasts, including multiple podcasts with Serial Productions; The Daily has been described as the modern front page of The New York Times. The Times's longest-running podcast is The Book Review Podcast, which debuted as Inside The New York Times Book Review in April 2006. 
The New York Times's defining podcast is The Daily, a daily news podcast hosted by Michael Barbaro that debuted on February 1, 2017. Between March 2022 and March 2025, the approximately 30-minute program was co-hosted with Sabrina Tavernise. Beginning in April 2025, Barbaro was joined by two new regular co-hosts, Natalie Kitroeff and Rachel Abrams. The Interview was launched in 2024 and is hosted weekly by David Marchese and Lulu Garcia-Navarro. Episodes typically last 40 to 50 minutes. Condensed versions of the interviews are published simultaneously in The New York Times Magazine. Guests have included politicians, actors, influential experts, media figures, and high-profile writers. In October 2021, The New York Times began testing "New York Times Audio", an application featuring podcasts from the Times, audio versions of articles (including from other publications through Audm), and archives from This American Life. The application debuted in May 2023 exclusively on iOS for Times subscribers. New York Times Audio includes exclusive podcasts such as The Headlines, a daily news recap, and Shorts, short audio stories under ten minutes. In addition, a "Reporter Reads" section features Times journalists reading their articles and providing commentary. The New York Times has used video games as part of its journalistic efforts, among the first publications to do so, contributing to an increase in Internet traffic; the publication has also developed its own video games. In 2014, The New York Times Magazine introduced Spelling Bee, a word game in which players form words from a set of letters arranged in a honeycomb and are awarded points based on word length, with extra points if the word is a pangram. The game was proposed by Will Shortz, created by Frank Longo, and has been maintained by Sam Ezersky. In May 2018, Spelling Bee was published on nytimes.com, furthering its popularity. In February 2019, the Times introduced Letter Boxed, in which players form words from letters placed on the edges of a square box, followed in June 2019 by Tiles, a matching game in which players form sequences of tile pairings, and Vertex, in which players connect vertices to assemble an image. In July 2023, The New York Times introduced Connections, in which players identify groups of words that are connected by a common property. Earlier, in April 2023, the Times introduced Digits, a game that required using operations on different values to reach a set number; Digits was shut down that August. In March 2024, The New York Times released Strands, a themed word search. In January 2022, The New York Times Company acquired Wordle, a word game developed by Josh Wardle in 2021, at a valuation in the "low-seven figures". The acquisition was suggested by David Perpich, a member of the Sulzberger family, who proposed the purchase to games head Jonathan Knight over Slack after reading about the game. The Washington Post purportedly considered acquiring Wordle, according to Vanity Fair. At the 2022 Game Developers Conference, Wardle stated that he was overwhelmed by the volume of Wordle facsimiles and overzealous monetization practices in other games. Concerns mounted over The New York Times monetizing Wordle by implementing a paywall; Wordle is a client-side browser game and can be played offline by downloading its webpage. Wordle moved to the Times's servers and website in February. The game was added to the NYT Games application in August, necessitating that it be rewritten using the JavaScript library React. 
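Wordle's central mechanic, the per-letter feedback returned after each guess, can be illustrated with the commonly described two-pass method below. This is a sketch of the game's rules rather than the Times's React code; the function name and colour labels are arbitrary choices for the example.

```python
# Sketch of Wordle-style per-letter feedback (green/yellow/gray) using the
# commonly described two-pass method; not code from the Times's implementation.
from collections import Counter

def feedback(guess: str, answer: str) -> list[str]:
    guess, answer = guess.lower(), answer.lower()
    result = ["gray"] * len(guess)
    remaining = Counter()

    # First pass: exact matches are green; count the unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "green"
        else:
            remaining[a] += 1

    # Second pass: right letter in the wrong position is yellow, limited by how
    # many unmatched copies of that letter remain in the answer.
    for i, g in enumerate(guess):
        if result[i] == "gray" and remaining[g] > 0:
            result[i] = "yellow"
            remaining[g] -= 1
    return result

print(feedback("crane", "cigar"))  # ['green', 'yellow', 'yellow', 'gray', 'gray']
```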
In November, The New York Times announced that Tracy Bennett would be Wordle's editor. Other publications The New York Times Magazine and The Boston Globe Magazine are the only remaining weekly Sunday magazines following The Washington Post Magazine's cancellation in December 2022. In February 2016, The New York Times introduced a Spanish website, The New York Times en Español. The website, intended to be read on mobile devices, would contain translated articles from the Times and reporting from journalists based in Mexico City. The Times en Español's style editor is Paulina Chavira, who has advocated for pluralistic Spanish to accommodate the variety of nationalities among the newsroom's journalists and who wrote a stylebook for The New York Times en Español. Articles the Times intends to publish in Spanish are sent to a translation agency and adapted for Spanish writing conventions; the present progressive tense may be used for forthcoming events in English, but other tenses are preferable in Spanish. The Times en Español consults the Real Academia Española and Fundéu, frequently modifies the use of diacritics—such as using an acute accent for the Cártel de Sinaloa but not the Cartel de Medellín—and uses the gender-neutral pronoun elle. Headlines in The New York Times en Español are not capitalized. The Times en Español publishes El Times, a newsletter led by Elda Cantú intended for all Spanish speakers. In September 2019, The New York Times ended The New York Times en Español's separate operations. A study published in The Translator in 2023 found that the Times en Español engaged in tabloidization. In June 2012, The New York Times introduced a Chinese website, 纽约时报中文, in response to Chinese editions created by The Wall Street Journal and the Financial Times. Conscious of censorship, the Times established servers outside of China and affirmed that the website would uphold the paper's journalistic standards; the government of China had previously blocked articles from nytimes.com through the Great Firewall, and the website had been blocked in China until August 2001, when then-general secretary Jiang Zemin met with journalists from The New York Times. Then-foreign editor Joseph Kahn assisted in the establishment of cn.nytimes.com, an effort that contributed to his appointment as executive editor in April 2022. In October 2012, 纽约时报中文 published an article detailing the wealth of then-premier Wen Jiabao's family. In response, the government of China blocked access to nytimes.com and cn.nytimes.com, and references to the Times and Wen were censored on the microblogging service Sina Weibo. In March 2015, a mirror of 纽约时报中文 and the website for GreatFire were the targets of a government-sanctioned distributed denial-of-service attack on GitHub, disabling access to the service for several days. Chinese authorities requested the removal of The New York Times's news applications from the App Store in December 2016. Awards and recognition As of 2023, The New York Times has received 137 Pulitzer Prizes, the most of any publication. The New York Times is considered a newspaper of record in the United States.[l] The Times is the largest metropolitan newspaper in the United States; as of 2022, The New York Times is the second-largest newspaper by print circulation in the United States, behind The Wall Street Journal. 
A study published in Science, Technology, & Human Values in 2013 found that The New York Times received more citations in academic journals than the American Sociological Review, Research Policy, or the Harvard Law Review. With sixteen million unique records, the Times is the third-most referenced source in Common Crawl, a collection of online material used in datasets such as GPT-3, behind Wikipedia and a United States patent database. The New Yorker's Max Norman wrote in March 2023 that the Times has shaped mainstream English usage. In a January 2018 article for The Washington Post, Margaret Sullivan stated that The New York Times affects the "whole media and political ecosystem". The New York Times's recent success has led to concerns over media consolidation, particularly amid the decline of newspapers. In 2006, economists Lisa George and Joel Waldfogel examined the effects of the Times's national distribution strategy on the audience and circulation of local newspapers, finding that local circulation decreased among college-educated readers. The effect of The New York Times in this manner was observed at The Forum of Fargo-Moorhead, the newspaper of record for Fargo, North Dakota. Axios founder Jim VandeHei opined that the Times is "going to basically be a monopoly" in a column written by then-media columnist and former BuzzFeed News editor-in-chief Ben Smith; in the article, Smith cites the strength of The New York Times's journalistic workforce, its broadening content, and its hiring of Gawker editor-in-chief Choire Sicha, Recode editor-in-chief Kara Swisher, and Quartz editor-in-chief Kevin Delaney. Smith compared the Times to the 1927 New York Yankees of Murderers' Row. Controversies Since 2003, studies analyzing coverage of the Israeli–Palestinian conflict in the New York Times have demonstrated a bias against Palestinians and in favor of Israel.[m] The New York Times has received criticism for its coverage of the Gaza war and genocide. In April 2024, The Intercept reported that a November 2023 internal memorandum by Susan Wessling and Philip Pan instructed journalists to limit use of the terms "genocide" and "ethnic cleansing", and to avoid the phrase "occupied territory" in the context of Palestinian land, the term "Palestine" except in rare circumstances, and the term "refugee camps" to describe areas of Gaza despite recognition from the United Nations. A spokesperson for the Times stated that issuing guidance was standard practice. An analysis by The Intercept noted that The New York Times described Israeli deaths as a massacre nearly sixty times but had described Palestinian deaths as a massacre only once. Writers and editors, including Jazmine Hughes and Jamie Lauren Keiles, have left the newspaper due to its coverage of events in Gaza. In December 2023, The New York Times published an investigation titled "'Screams Without Words': How Hamas Weaponized Sexual Violence on Oct. 7", alleging that Hamas weaponized sexual and gender-based violence during its armed incursion into Israel. 
The investigation was the subject of an article from The Intercept that questioned the journalistic acumen of Anat Schwartz, a filmmaker involved in the inquiry who had no prior reporting experience and who had agreed with a post stating Israel should "violate any norm, on the way to victory"; doubted the veracity of the opening claim that Gal Abdush was raped, within a timespan disputed by her family; and alleged that the Times was pressured by the Committee for Accuracy in Middle East Reporting in America. The New York Times initiated an inquiry into the leaking of confidential information about the report to other outlets, which drew criticism from NewsGuild of New York president Susan DeCarava for purported racial targeting; the Times's investigation was inconclusive but found gaps in the way proprietary journalistic material is handled. The New York Times Building has been a site of protest action during the Gaza war and genocide, including a November 2023 sit-in demanding that the Times's editorial board publicly call for a ceasefire and accusing the media company of "complicity in laundering genocide"; a February 29, 2024, protest and press conference following the release of The Intercept's critical investigation into the NYT "Screams Without Words" exposé; and an action on July 30, 2025, in which protesters spray-painted "NYT Lies, Gaza dies" on the building's glass facade. In addition, protesters blocked The New York Times's distribution center on March 14, 2024, and executive editor Joseph Kahn's residence was splattered with red paint on August 25, 2025. The collective Writers Against the War on Gaza, which publishes the mock publication The New York War Crimes, has been associated with protests against The New York Times. On October 27, 2025, 300 writers—including scholars, journalists, and public intellectuals—pledged to boycott The New York Times and withhold contributions to the paper in protest of what they describe as its complicity in the Gaza genocide, demanding (1) a review of anti-Palestinian bias in the newsroom, (2) a retraction of "Screams Without Words", and (3) a call from the editorial board for a US arms embargo on Israel. Among the initial signatories, about 150 had previously contributed to the Times. The New York Times has received criticism regarding its coverage of transgender people. When it published an opinion piece by Weill Cornell Medicine professor Richard A. Friedman called "How Changeable Is Gender?" in August 2015, Vox's German Lopez criticized Friedman for suggesting that parents and doctors might be right to let children suffer from severe dysphoria in case something changes down the line, and for implying that conversion therapy may work for transgender children. In February 2023, nearly one thousand current and former Times writers and contributors wrote an open letter addressed to standards editor Philip B. Corbett, criticizing the paper's coverage of transgender, non-binary, and gender-nonconforming people; some of the Times's articles have been cited in state legislatures attempting to justify criminalizing gender-affirming care. 
Contributors wrote in the open letter that "the Times has in recent years treated gender diversity with an eerily familiar mix of pseudoscience and euphemistic, charged language, while publishing reporting on trans children that omits relevant information about its sources."[n] According to former Times journalist Billie Jean Sweeney, a push for writers to challenge "every aspect of being trans", ranging from gender-inclusive language to access to medical care, came from the top in 2022 after leadership was handed over to A. G. Sulzberger, Joe Kahn, and Carolyn Ryan, as part of an effort to win goodwill with the Trump campaign without incurring backlash from the general populace. The Times has continually denied any bias in its reporting, insisting that its coverage of "fiercely contested medical and legal debates" is fair and balanced and that it would not tolerate journalists protesting its transgender coverage.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Channel_13_(Israel)] | [TOKENS: 756]
Contents Channel 13 (Israel) Channel 13 (Hebrew: ערוץ 13) is an Israeli free-to-air television channel operated by Reshet Media. It was launched on 1 November 2017 as one of two replacements of the outgoing Channel 2. History Israel's Channel 2 was operated by the Second Authority for Television and Radio, but was programmed by two rotating companies, Keshet Media Group and Reshet. As part of a larger series of reforms to Israel's broadcast system to increase diversity and competition, Channel 2 was shut down, and both broadcasters were granted their own, separate channels. Reshet 13 and Keshet 12 both officially launched on 1 November 2017 as standalone channels. The Israel Television News Company continued to provide news programmes for both channels; the main primetime bulletin is simulcast by both channels, while other programs are divided among the two channels. In June 2018, due to financial issues caused by the 2017 Channel 2 split, RGE (owner of Channel 10) filed a merger with this channel's parent company Reshet. In October 2018, Reshet announced that the merger had been cancelled. Reshet's owners have since reconsidered the merger, and after a long battle with the Second Authority, the merger was approved, and was scheduled for January 16, 2019. The merger saw certain programmes previously shown on Channel 10 moved to Reshet 13, which subsequently changed its name to Channel 13, with the new slogan "everything connects". Ownership and management When the channel began broadcasting, on 1 November 2017, it was controlled by the owners of Reshet Media - Ofer Family (51%), Strauss Investments (16%) and Endemol (33%), and the Channel’s CEO was Avi Zvi. Following the Reshet-Channel 10 merger, the shareholding composition for the channel changed as of 16 January 2019, as follows: Sir Len Blavatnik held approximately 52%, Udi Recanati (through Naftali Investments) held approximately 9% and Nadav Topolsky held approximately 7% of the shares. Jointly the three held 68% as RGE. Of the Reshet shareholders, Udi Angel held approximately 16%, Endemol held approximately 11% and Michael Strauss held approximately 5%. In June 2020, negotiations began with other parties in an attempt to form a partnership which would absorb Blavatnik’s share of the company. In early 2021, discussions began with Discovery Group and an agreement was reached and approved in July 2021. The channel’s new shareholding structure is: ~52% Sir Len Blavatnik’s Access, 21% Discovery, 13.5% Nadav Topolsky, 8% Udi Angel and 5.5% Strauss Family. In practice, Discovery replaced Endemol and Recanati, which held shares in the Channel. The current CEO is Yoram Altman Eldar and the CEO of the News Company is Aviram Elad. Programming Being one of two direct replacements of Channel 2, Reshet 13 has broadcast news programmes produced by Israel Television News Company (which produced HaHadashot 2 for Channel 2), a company Reshet jointly owned with Keshet. Following the merger between Reshet and Channel 10, Reshet sold its stake in the News Company to Keshet. Reshet 13 took resources and programmes from Channel 10's news production company, which then changed on-air branding to HaHadashot 13. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-142] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research but decided to build what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One American retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console countrywide, as the PS one model, on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, a third company had registered the trademark, so the console could not be officially released; the market was initially taken over by the officially distributed Sega Saturn, but as the Sega console withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market, the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'"

As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where selected games could be demonstrated and played. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money for impromptu marketing.

In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time.

In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to shake Sony's dominance of the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, a milestone the PlayStation 2 later reached even faster. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and capable of 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can render up to 4,000 sprites and 180,000 polygons per second, or 360,000 polygons per second when flat-shaded.

The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model onwards. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port.

Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software necessary to program PlayStation games and applications using C compilers.
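The memory figures above lend themselves to a simple worked check. The following C sketch is purely illustrative and is not drawn from Sony's documentation or the Net Yaroze tooling: it assumes a 16-bit-per-pixel framebuffer and an optional second buffer, and compares the resulting sizes for the display modes listed above against the console's stated 1 MB of video RAM.

    #include <stdio.h>

    /*
     * Illustrative back-of-the-envelope check only, not taken from any official
     * PlayStation documentation. It sizes a hypothetical 16-bit-per-pixel
     * framebuffer for the display modes quoted above against the console's
     * stated 1 MB of video RAM. The 2-bytes-per-pixel figure and the
     * double-buffering scenario are assumptions made for this example.
     */

    struct mode { int width; int height; };

    int main(void) {
        const long vram_bytes = 1024L * 1024L;   /* 1 MB of VRAM, as stated above */
        const int bytes_per_pixel = 2;           /* assumed 16-bit framebuffer */
        const struct mode modes[] = { {256, 224}, {320, 240}, {640, 480} };
        const int count = (int)(sizeof(modes) / sizeof(modes[0]));

        for (int i = 0; i < count; i++) {
            long one_buffer  = (long)modes[i].width * modes[i].height * bytes_per_pixel;
            long two_buffers = one_buffer * 2;   /* hypothetical double buffering */
            printf("%3dx%-3d  single buffer: %7ld bytes  double buffered: %7ld bytes (%s in 1 MB)\n",
                   modes[i].width, modes[i].height,
                   one_buffer, two_buffers,
                   two_buffers <= vram_bytes ? "fits" : "does not fit");
        }
        return 0;
    }

Under these assumptions, only the two lower resolutions leave room within 1 MB for a second buffer and any textures, while a double-buffered 640×480 framebuffer alone would already exceed it.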
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding a degree of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons marked with simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square (△, ○, ✕, □). Rather than labelling its buttons with the letters or numbers traditionally used, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the larger average hand size in those regions.

Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were sufficient. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size.

The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony removed the haptic feedback from all overseas versions before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. A Nintendo spokesman, however, denied that Nintendo had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, the DualShock has analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller.

Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America.

In addition to playing games, most PlayStation models are equipped to play CD Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game disc inserted or without closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem!
were subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and any duplicate it made therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading.

Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt and no longer point directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation also does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units.

Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.

Reception

The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling those of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third.

The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from its video game division coming to account for 23% of the company's total profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry forward, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games to the user at about 40% lower cost than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation."

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavourably with Nintendo's rival consoles, the Nintendo Entertainment System Classic Edition and the Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_Israel#Second_Temple_period] | [TOKENS: 14912]
Contents History of Israel

The history of Israel covers the Southern Levant region, also known as Canaan, Palestine, or the Holy Land, which is the location of Israel and Palestine. From prehistory, as part of the Levantine corridor, the area witnessed waves of early humans from Africa, then the emergence of Natufian culture c. 10,000 BCE. The region entered the Bronze Age c. 2,000 BCE with the development of Canaanite civilization. In the Iron Age, the kingdoms of Israel and Judah were established, entities central to the origins of the Abrahamic religions; these have given rise to Judaism, Samaritanism, Christianity, Islam, Druzism, and Baha'ism. The Land of Israel has seen many conflicts, been controlled by various polities, and hosted various ethnic groups.

In the following centuries, the Assyrian, Babylonian, Achaemenid, and Macedonian empires conquered the region. The Ptolemies and Seleucids vied for control during the Hellenistic period. Through the Hasmonean dynasty, the Jews maintained independence for a century before incorporation into the Roman Republic. As a result of the Jewish–Roman wars in the 1st and 2nd centuries CE, many Jews were killed or sold into slavery. Following the advent of Christianity, demographics shifted towards the newly converted Christians, who replaced Jews as the majority by the 4th century. In the 7th century, Byzantine Christian rule over Israel was superseded by the Rashidun Caliphate's Muslim conquest of the Levant; the region was later ruled by the Umayyad, Abbasid, and Fatimid caliphates before being conquered by the Seljuks in the 1070s. Throughout the 12th and 13th centuries, the Land of Israel saw wars between Christians and Muslims as part of the Crusades, with the Kingdom of Jerusalem overrun by Saladin's Ayyubids in the 12th century. The Crusaders held on to shrinking territories for another century. In the 13th century, the Land of Israel became subject to Mongol invasion, though this was stopped by the Mamluk Sultanate, under whose rule it remained until the 16th century. The Mamluks were defeated by the Ottoman Empire, and the region became an Ottoman province until the early 20th century.

The 19th century saw the rise of a Jewish nationalist movement in Europe known as Zionism, and aliyah, Jewish immigration to the Land of Israel from the diaspora, increased. During World War I, the Sinai and Palestine campaign of the Allies led to the partition of the Ottoman Empire. Britain was granted control of the region by a League of Nations mandate, known as Mandatory Palestine. The British committed to the creation of a Jewish homeland in the 1917 Balfour Declaration. Palestinian Arabs sought to prevent Jewish immigration, and tensions grew during the British administration. In 1947, the UN voted for the partition of Mandate Palestine and the creation of Jewish and Arab states; the Jews accepted the plan, while the Arabs rejected it. A civil war ensued, won by the Jews. In May 1948, the Israeli Declaration of Independence sparked the 1948 War, in which Israel repelled the armies of the neighbouring states. The war resulted in the 1948 Palestinian expulsion and flight and led to Jewish emigration from other parts of the Middle East. About 40% of the global Jewish population resides in Israel. In 1979, the Egypt–Israel peace treaty was signed. In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian Authority. In 1994, the Israel–Jordan peace treaty was signed.
Despite a long-running Israeli–Palestinian peace process, the conflict continues.

Prehistory

The oldest evidence of early humans in the territory of modern Israel, dating to 1.5 million years ago, was found in Ubeidiya near the Sea of Galilee. Flint tool artefacts have been discovered at Yiron, the oldest stone tools found anywhere outside Africa. The Daughters of Jacob Bridge over the Jordan River provides evidence of the control of fire by early humans around 780,000 years ago, one of the oldest known examples. In the Mount Carmel area, at el-Tabun and Es Skhul, Neanderthal and early modern human remains were found, showing the longest stratigraphic record in the region, spanning 600,000 years of human activity from the Lower Paleolithic to the present day and representing roughly a million years of human evolution. Other significant Paleolithic sites include Qesem cave. A 200,000-year-old fossil from Misliya Cave is the second-oldest evidence of anatomically modern humans found outside Africa. Other notable finds include the Skhul and Qafzeh hominins, as well as Manot 1. Around the 10th millennium BCE, the Natufian culture existed in the area. The beginning of agriculture in the region during the Neolithic Revolution is evidenced by sites such as Nahal Oren and Gesher.

Bronze Age Canaan

The Canaanites are archaeologically attested in the Middle Bronze Age (2100–1550 BCE), when the region was probably organised into independent or semi-independent city-states. Cities were often surrounded by massive earthworks, resulting in the archaeological mounds, or "tells", common in the region today. In the late Middle Bronze Age, the Nile Delta in Egypt was settled by Canaanites who maintained close connections with Canaan. During that period, the Hyksos, dynasties of Canaanite/Asiatic origin, ruled much of Lower Egypt before being overthrown in the 16th century BCE. During the Late Bronze Age (1550–1200 BCE), Canaanite vassal states paid tribute to the New Kingdom of Egypt, which governed from Gaza. In 1457 BCE, Egyptian forces under the command of Pharaoh Thutmose III defeated a rebellious coalition of Canaanite vassal states, led by the king of Kadesh, at the Battle of Megiddo. In the Late Bronze Age a period of civilizational collapse struck the Middle East: Canaan fell into chaos and Egyptian control ended. There is evidence that urban centers such as Hazor, Beit She'an, Megiddo, Ekron, Isdud and Ascalon were damaged or destroyed. Two groups appear at this time and are associated with the transition to the Iron Age (both used iron weapons and tools, which were superior to earlier bronze ones): the Sea Peoples, particularly the Philistines, who migrated from the Aegean world and settled on the southern coast, and the Israelites, whose settlements dotted the highlands. Some 2nd-millennium inscriptions about the semi-nomadic Habiru people are believed to be connected to the Hebrews, who were generally synonymous with the Biblical Israelites. Many scholars regard this connection as plausible since the two ethnonyms have similar etymologies, although others argue that Habiru refers to a social class found in every Near Eastern society, including Hebrew societies.

Ancient Israel and Judah: Iron Age to Babylonian period

The earliest recorded evidence of a people by the name of Israel (as ysrỉꜣr) occurs in the Egyptian Merneptah Stele, erected for Pharaoh Merneptah c. 1209 BCE.
Archeological evidence indicates that during the early Iron Age I, hundreds of small villages were established on the highlands of Canaan on both sides of the Jordan River, primarily in Samaria, north of Jerusalem. These villages had populations of up to 400, were largely self-sufficient, and lived from herding, grain cultivation, and the growing of vines and olives, with some economic interchange. The pottery was plain and undecorated. Writing was known and available for recording, even in small sites. William G. Dever sees this "Israel" in the central highlands as a cultural and probably political entity, more an ethnic group than an organized state. Modern scholars believe that the Israelites and their culture branched out of the Canaanite peoples and their cultures through the development of a distinct monolatristic, and later monotheistic, religion centred on a national god Yahweh. According to McNutt, "It is probably safe to assume that sometime during Iron Age I a population began to identify itself as 'Israelite'", differentiating itself from the Canaanites through such markers as the prohibition of intermarriage, an emphasis on family history and genealogy, and religion. Philistine cooking tools, the prevalence of pork in the Philistine diet, and locally made Mycenaean pottery (which later evolved into bichrome Philistine pottery) all support the Philistines' foreign origin. Their cities were large and elaborate, which, together with these findings, points to a complex, hierarchical society. Israel Finkelstein believes that the oldest Abraham traditions, which focus on the themes of land and offspring and possibly on his altars in Hebron, originated in the Iron Age; Abraham's Mesopotamian heritage is not discussed in them.

In the 10th century BCE, the Israelite kingdoms of Judah and Israel emerged. The Hebrew Bible states that these were preceded by a single kingdom ruled by Saul, David and Solomon, who is said to have built the First Temple. Archaeologists have debated whether the united monarchy ever existed,[Notes 1] with those who accept that such a polity existed further divided between maximalists, who support the Biblical accounts, and minimalists, who argue that any such polity was likely smaller than suggested. Historians and archaeologists agree that the northern Kingdom of Israel existed by ca. 900 BCE and the Kingdom of Judah by ca. 850 BCE. The Kingdom of Israel was the more prosperous of the two and soon developed into a regional power; during the days of the Omride dynasty, it controlled Samaria, Galilee, the upper Jordan Valley, the Sharon and large parts of the Transjordan. Samaria, the capital, was home to one of the largest Iron Age structures in the Levant. The Kingdom of Israel's capital moved between Shechem, Penuel and Tirzah before Omri established it at Samaria, and the royal succession was often settled by a military coup d'état. The Kingdom of Judah was smaller but more stable; the Davidic dynasty ruled the kingdom for the four centuries of its existence, with the capital always in Jerusalem, controlling the Judaean Mountains, most of the Shephelah and the Beersheba valley in the northern Negev. In 854 BCE, according to the Kurkh Monoliths, an alliance between Ahab of Israel and Ben Hadad II of Aram-Damascus managed to repulse the incursions of the Assyrians, with a victory at the Battle of Qarqar.
Another important discovery of the period is the Mesha Stele, a Moabite stele found in Dhiban when Emir Sattam Al-Fayez led Henry Tristram to it as they toured the lands of the vassals of the Bani Sakher; the stele is now in the Louvre. In the stele, Mesha, king of Moab, tells how Chemosh, the god of Moab, had been angry with his people and had allowed them to be subjugated to the Kingdom of Israel, but at length Chemosh returned and assisted Mesha to throw off the yoke of Israel and restore the lands of Moab. It refers to Omri, king of Israel, and to the god Yahweh, and may contain another early reference to the House of David.

The Kingdom of Israel fell to the Assyrians following a long siege of the capital Samaria around 720 BCE. The records of Sargon II indicate that he captured Samaria and deported 27,290 inhabitants to Mesopotamia; however, it is likely that Shalmaneser captured the city, since both the Babylonian Chronicles and the Hebrew Bible viewed the fall of Israel as the signature event of his reign. The Assyrian deportations became the basis for the Jewish idea of the Ten Lost Tribes. Foreign groups were settled by the Assyrians in the territories of the fallen kingdom. The Samaritans claim to be descended from Israelites of ancient Samaria who were not expelled by the Assyrians. It is believed that refugees from the destruction of Israel moved to Judah, massively expanding Jerusalem and leading to the construction of the Siloam Tunnel during the rule of King Hezekiah (ruled 715–686 BCE). The Siloam inscription, a plaque written in Hebrew left by the construction team, was discovered in the tunnel in the 1880s and is today held by the Istanbul Archaeology Museum. During Hezekiah's rule, Sennacherib, the son of Sargon, attempted but failed to capture Judah. Assyrian records say that Sennacherib levelled 46 walled cities and besieged Jerusalem, leaving after receiving extensive tribute. Sennacherib erected the Lachish reliefs in Nineveh to commemorate a second victory, at Lachish.

The writings of four different "prophets" are believed to date from this period: Hosea and Amos in Israel, and Micah and Isaiah in Judah. These men were mostly social critics who warned of the Assyrian threat and acted as religious spokesmen. They exercised some form of free speech and may have played a significant social and political role in Israel and Judah. They urged rulers and the general populace to adhere to god-conscious ethical ideals, seeing the Assyrian invasions as a divine punishment of the collective resulting from ethical failures. Under King Josiah (who ruled from 641 to 619 BCE), the Book of Deuteronomy was either rediscovered or written. The Book of Joshua and the accounts of the kingship of David and Solomon in the Book of Kings are believed to have the same author. The books are known as the Deuteronomistic history and are considered a key step in the emergence of monotheism in Judah. They emerged at a time when Assyria was weakened by the rise of Babylon and may represent a committing to text of earlier oral traditions.

During the late 7th century BCE, Judah became a vassal state of the Neo-Babylonian Empire. In 601 BCE, Jehoiakim of Judah allied with Babylon's principal rival, Egypt, despite the strong remonstrances of the prophet Jeremiah. As a punishment, the Babylonians besieged Jerusalem in 597 BCE, and the city surrendered. The defeat was recorded by the Babylonians.
Nebuchadnezzar pillaged Jerusalem and deported King Jehoiachin (Jeconiah), along with other prominent citizens, to Babylon; Zedekiah, his uncle, was installed as king. A few years later, Zedekiah launched another revolt against Babylon, and an army was sent to conquer Jerusalem. In 587 or 586 BCE, King Nebuchadnezzar II of Babylon conquered Jerusalem, destroyed the First Temple and razed the city. The Kingdom of Judah was abolished, and many of its citizens were exiled to Babylon. The former territory of Judah became a Babylonian province called Yehud, with its center at Mizpah, north of the destroyed Jerusalem. Tablets that describe King Jehoiachin's rations were found in the ruins of Babylon; he was eventually released by the Babylonians. According to both the Bible and the Talmud, the Davidic dynasty continued as head of Babylonian Jewry, called the "Rosh Galut" (exilarch or head of exile). Arab and Jewish sources show that the Rosh Galut continued to exist for another 1,500 years in what is now Iraq, ending in the eleventh century.

Second Temple period

In 538 BCE, Cyrus the Great of the Achaemenid Empire conquered Babylon and took over its empire. Cyrus issued a proclamation granting religious freedom to all peoples subjugated by the Babylonians (see the Cyrus Cylinder). According to the Bible, Jewish exiles in Babylon, including 50,000 Judeans led by Zerubbabel, returned to Judah to rebuild the Temple in Jerusalem. The Second Temple was subsequently completed c. 515 BCE. A second group of 5,000, led by Ezra and Nehemiah, returned to Judah in 456 BCE. The first group was empowered by the Persian king to enforce religious rules, while the second had the status of governor and a royal mission to restore the walls of the city. The country remained a province of the Achaemenid empire, called Yehud, until 332 BCE.

The final text of the Torah is thought to have been written during the Persian period (probably 450–350 BCE), formed by editing and unifying earlier texts. The returning Israelites adopted an Aramaic script (also known as the Ashuri alphabet), which they brought back from Babylon; this is the current Hebrew script. The Hebrew calendar closely resembles the Babylonian calendar and probably dates from this period. The Bible describes tension between the returnees, the elite of the First Temple period, and those who had remained in Judah. It is possible that the returnees, supported by the Persian monarchy, became large landholders at the expense of the people who had remained to work the land in Judah, whose opposition to the Second Temple would have reflected a fear that exclusion from the cult would deprive them of land rights. Judah had become in practice a theocracy, ruled by hereditary High Priests and a Persian-appointed governor, frequently Jewish, charged with keeping order and seeing that tribute was paid. A Judean military garrison was placed by the Persians on Elephantine Island near Aswan in Egypt. In the early 20th century, 175 papyrus documents recording activity in this community were discovered, including the "Passover Papyrus", a letter instructing the garrison on how to correctly conduct the Passover feast.

In 332 BCE, Alexander the Great of Macedon conquered the region as part of his campaign against the Achaemenid Empire. After his death in 323 BCE, his generals divided the empire, and Judea became a frontier region between the Seleucid Empire and the Ptolemaic Kingdom in Egypt.
Following a century of Ptolemaic rule, Judea was conquered by the Seleucid Empire in 200 BCE at the battle of Panium. Hellenistic rulers generally respected Jewish culture and protected Jewish institutions, and Judea was ruled by the hereditary office of the High Priest of Israel as a Hellenistic vassal. Nevertheless, the region underwent a process of Hellenization, which heightened tensions between Greeks, Hellenized Jews, and observant Jews. These tensions escalated into clashes involving a power struggle over the position of high priest and the character of the holy city of Jerusalem. When Antiochus IV Epiphanes desecrated the temple, forbade Jewish practices, and forcibly imposed Hellenistic norms on the Jews, several centuries of religious tolerance under Hellenistic control came to an end. In 167 BCE, the Maccabean revolt erupted after Mattathias, a Jewish priest of the Hasmonean lineage, killed a Hellenized Jew and a Seleucid official who participated in a sacrifice to the Greek gods in Modi'in. His son Judas Maccabeus defeated the Seleucids in several battles, and in 164 BCE he captured Jerusalem and restored temple worship, an event commemorated by the Jewish festival of Hanukkah.

After Judas' death, his brothers Jonathan Apphus and Simon Thassi were able to establish and consolidate a vassal Hasmonean state in Judea, capitalizing on the Seleucid Empire's decline as a result of internal instability and wars with the Parthians, and by forging ties with the rising Roman Republic. The Hasmonean leader John Hyrcanus was able to gain independence, doubling Judea's territories. He took control of Idumaea, where he converted the Edomites to Judaism, and invaded Scythopolis and Samaria, where he demolished the Samaritan Temple. Hyrcanus was also the first Hasmonean leader to mint coins. Under his sons, kings Aristobulus I and Alexander Jannaeus, Hasmonean Judea became a kingdom, and its territories continued to expand, now also covering the coastal plain, Galilee and parts of the Transjordan. Some scholars argue that the Hasmonean dynasty also institutionalized the final Jewish biblical canon. Under Hasmonean rule, the Pharisees, Sadducees and the mystic Essenes emerged as the principal Jewish social movements. The Pharisee sage Simeon ben Shetach is credited with establishing the first schools based around meeting houses; this was a key step in the emergence of Rabbinical Judaism. After Jannaeus' widow, queen Salome Alexandra, died in 67 BCE, her sons Hyrcanus II and Aristobulus II engaged in a civil war over the succession. Both parties requested Pompey's assistance, which paved the way for a Roman takeover of the kingdom.

In 63 BCE, the Roman Republic conquered Judaea, ending Jewish independence under the Hasmoneans. The Roman general Pompey intervened in the dynastic civil war and, after capturing Jerusalem, reinstated Hyrcanus II as high priest but denied him the title of king. Rome soon installed the Herodian dynasty, of Idumean descent but Jewish by conversion, as a loyal replacement for the nationalist Hasmoneans. In 37 BCE, Herod the Great, the first client king of this line, took power after defeating the restored Hasmonean king Antigonus II Mattathias. Herod imposed heavy taxes, suppressed opposition, and centralized authority, which fostered widespread resentment.
Herod also carried out major monumental construction projects throughout his kingdom and significantly expanded the Second Temple, which he transformed into one of the largest religious structures in the ancient world. After his death in 4 BCE, his kingdom was divided among his sons into a tetrarchy under continued Roman oversight.

In 6 CE, the Roman emperor Augustus transformed Judaea into a Roman province, deposing its last Jewish ruler, Herod Archelaus, and appointing a Roman governor in his place. That same year, a census triggered a small uprising by Judas of Galilee, the founder of a movement that rejected foreign authority and recognized only God as king. Over the next six decades, with the exception of a brief period of Jewish autonomy under the client king Herod Agrippa I, the province remained under direct Roman administration. Some governors ruled with brutality and showed little regard for Jewish religious sensitivities, deepening resentment among the local population. This discontent was also fueled by poor governance, corruption, and growing economic inequality, along with rising tensions between Jews and neighboring populations over ethnic, religious, and territorial disputes. At the same time, collective memory of the Maccabean revolt and the period of Hasmonean independence continued to inspire hopes of national liberation from Roman control. In 64 CE, the Temple High Priest Joshua ben Gamla introduced a religious requirement for Jewish boys to learn to read from the age of six; over the next few hundred years this requirement became steadily more ingrained in Jewish tradition.

The Jewish–Roman wars were a series of large-scale revolts by Jewish subjects against the Roman Empire between 66 and 135 CE. The term primarily applies to the First Jewish–Roman War (66–73 CE) and the Bar Kokhba revolt (132–136 CE), both nationalist rebellions aimed at restoring Jewish independence in Judea. Some sources also include the Diaspora Revolt (115–117 CE), an ethno-religious conflict fought across the Eastern Mediterranean and including the Kitos War in Judaea. The Jewish–Roman wars had a devastating impact on the Jewish people, transforming them from a major population in the Eastern Mediterranean into a dispersed and persecuted minority. The First Jewish–Roman War culminated in the destruction of Jerusalem and other towns and villages in Judaea, resulting in significant loss of life and a considerable segment of the population being uprooted or displaced. Those who remained were stripped of any form of political autonomy. Subsequently, the brutal suppression of the Bar Kokhba revolt had even more severe consequences. Judea witnessed a significant depopulation, as many Jews were killed, expelled, or sold into slavery. The outcome of the conflict marked the end of efforts to reestablish a Jewish state until the modern era. Jews were banned from residing in the vicinity of Jerusalem, which the Romans rebuilt into the pagan colony of Aelia Capitolina, and the province of Judaea was renamed Syria Palaestina. Collectively, these events enhanced the role of the Jewish diaspora, relocating the Jewish demographic and cultural center to Galilee and eventually to Babylonia, with smaller communities across the Mediterranean, the Middle East, and beyond. The Jewish–Roman wars also had a major impact on Judaism, after the central worship site of Second Temple Judaism, the Second Temple in Jerusalem, was destroyed by Titus's troops in 70 CE.
The destruction of the Temple led to a transformation in Jewish religious practices, emphasizing prayer, Torah study, and communal gatherings in synagogues. This pivotal shift laid the foundation for the emergence of Rabbinic Judaism, which has been the dominant form of Judaism since late antiquity, after the codification of the Babylonian Talmud.

Late Roman and Byzantine periods

As a result of the disastrous effects of the Bar Kokhba revolt, the Jewish presence in the region significantly dwindled. Over the next centuries, more Jews left for communities in the Diaspora, especially the large, rapidly growing Jewish communities in Babylonia and Arabia. Others remained in the Land of Israel, where the spiritual and demographic center shifted from the depopulated Judea to Galilee. Jewish presence also continued in the southern Hebron Hills, in Ein Gedi, and on the coastal plain. The Mishnah and the Jerusalem Talmud, huge compendiums of Rabbinical discussions, were compiled during the 2nd to 4th centuries CE in Tiberias and Jerusalem. Following the revolt, Judea's countryside was settled by pagan populations, including migrants from the nearby provinces of Syria, Phoenicia, and Arabia, whereas Aelia Capitolina, its immediate vicinity, and the administrative centers were now inhabited by Roman veterans and settlers from the western parts of the empire.

The Romans permitted a hereditary Rabbinical Patriarch from the House of Hillel, called the "Nasi", to represent the Jews in dealings with the Romans. One prominent figure was Judah ha-Nasi, credited with compiling the final version of the Mishnah, a vast collection of Jewish oral traditions. He also emphasized the importance of education in Judaism, leading to requirements that illiterate Jews be treated as outcasts; this might have contributed to some illiterate Jews converting to Christianity. Jewish seminaries, such as those at Shefaram and Bet Shearim, continued to produce scholars, the best of whom became members of the Sanhedrin, which was located first at Sepphoris and later at Tiberias. In the Galilee, many synagogues have been found dating from this period, and the burial site of the Sanhedrin leaders was discovered in Beit She'arim. In the 3rd century, the Roman Empire faced an economic crisis and imposed heavy taxation to fund wars of imperial succession. This situation prompted additional Jewish migration from Syria Palaestina to the Sasanian Empire, known for its more tolerant environment; there, a flourishing Jewish community with important Talmudic academies thrived in Babylonia, engaging in a notable rivalry with the Talmudic academies of Palaestina.

Early in the 4th century, the Emperor Constantine made Constantinople the capital of the East Roman Empire and made Christianity an accepted religion. His mother Helena made a pilgrimage to Jerusalem (326–328) and led the construction of the Church of the Nativity (the birthplace of Jesus in Bethlehem), the Church of the Holy Sepulchre (the burial site of Jesus in Jerusalem) and other key churches that still exist. The name Jerusalem was restored to Aelia Capitolina, which became a Christian city. Jews were still banned from living in Jerusalem, but were allowed to visit and worship at the site of the ruined temple. Over the course of the next century Christians worked to eradicate "paganism", leading to the destruction of classical Roman traditions and the eradication of their temples. In 351–2, another Jewish revolt erupted in the Galilee against a corrupt Roman governor.
The Roman Empire split in 395 CE and the region became part of the Eastern Roman Empire, known as the Byzantine Empire. Under Byzantine rule, much of the region's non-Jewish population converted to Christianity, which eventually became the dominant religion in the region. The presence of holy sites drew Christian pilgrims, some of whom chose to settle, contributing to the rise of a Christian majority. Christian authorities encouraged this pilgrimage movement and appropriated lands, constructing magnificent churches at locations linked to biblical narratives. Additionally, monks established monasteries near pagan settlements, encouraging the conversion of local pagans. During the Byzantine period, the Jewish presence in the region declined, and it is believed that Jews lost their majority status in Palestine in the fourth century. While Judaism remained the sole non-Christian religion tolerated, restrictions on Jews gradually increased, prohibiting them from building new places of worship, holding public office, or owning Christian slaves. In 425, after the death of the last Nasi, Gamliel VI, the Nasi office and the Sanhedrin were officially abolished, and the standing of yeshivot weakened. The leadership void was gradually filled by the Jewish center in Babylonia, which would assume a leading role in the Jewish world for generations after the Byzantine period. During the 5th and 6th centuries CE, the region witnessed a series of Samaritan revolts against Byzantine rule. Their suppression resulted in the decline of Samaritan presence and influence, and further consolidated Christian domination. Though it is acknowledged that some Jews and Samaritans converted to Christianity during the Byzantine period, the reliable historical records are limited, and they pertain to individual conversions rather than entire communities. In 611, Khosrow II, ruler of Sassanid Persia, invaded the Byzantine Empire. He was helped by Jewish fighters recruited by Benjamin of Tiberias and captured Jerusalem in 614. The "True Cross" was captured by the Persians. The Jewish Himyarite Kingdom in Yemen may also have provided support. Nehemiah ben Hushiel was made governor of Jerusalem. Christian historians of the period claimed the Jews massacred Christians in the city, but there is no archeological evidence of destruction, leading modern historians to question their accounts. In 628, Kavad II (son of Khosrow) returned Palestine and the True Cross to the Byzantines and signed a peace treaty with them. Following the Byzantine re-entry, Heraclius massacred the Jewish population of Galilee and Jerusalem, while renewing the ban on Jews entering the latter. Early Muslim period The Levant was conquered by an Arab army under the command of ʿUmar ibn al-Khaṭṭāb in 635, and became the province of Bilad al-Sham of the Rashidun Caliphate. Two military districts—Jund Filastin and Jund al-Urdunn—were established in Palestine. A new city called Ramlah was built as the Muslim capital of Jund Filastin, while Tiberias served as the capital of Jund al-Urdunn. The Byzantine ban on Jews living in Jerusalem came to an end. In 661, Mu'awiya I was crowned Caliph in Jerusalem, becoming the first of the (Damascus-based) Umayyad dynasty. In 691, Umayyad Caliph Abd al-Malik (685–705) constructed the Dome of the Rock shrine on the Temple Mount, where the two Jewish temples had been located. A second building, the Al-Aqsa Mosque, was also erected on the Temple Mount in 705.
Both buildings were rebuilt in the 10th century following a series of earthquakes. In 750, Arab discrimination against non-Arab Muslims led to the Abbasid Revolution and the Umayyads were replaced by the Abbasid Caliphs, who built a new city, Baghdad, to be their capital. This period is known as the Islamic Golden Age; the Arab Empire was the largest in the world, and Baghdad its largest and richest city. Both Arabs and minorities prospered across the region and much scientific progress was made. There were, however, setbacks: during the 8th century, the Caliph Umar II introduced a law requiring Jews and Christians to wear identifying clothing. Jews were required to wear yellow stars around their necks and on their hats, while Christians had to wear blue. Clothing regulations arose during repressive periods of Arab rule and were designed more to humiliate than to persecute non-Muslims. A poll tax was imposed on all non-Muslims by Islamic rulers and failure to pay could result in imprisonment or worse. In 982, Caliph Al-Aziz Billah of the Cairo-based Fatimid dynasty conquered the region. The Fatimids were followers of Isma'ilism, a branch of Shia Islam, and claimed descent from Fatima, Mohammed's daughter. Around the year 1010, the Church of the Holy Sepulchre (believed to be Jesus' burial site) was destroyed by the Fatimid Caliph al-Hakim, who relented ten years later and paid for it to be rebuilt. In 1020, al-Hakim claimed divine status and the newly formed Druze religion gave him the status of a messiah. Although the Arab conquest was relatively peaceful and did not cause widespread destruction, it did alter the country's demographics significantly. Over the ensuing several centuries, the region experienced a drastic decline in its population, from an estimated 1 million during Roman and Byzantine times to some 300,000 by the early Ottoman period. This demographic collapse was accompanied by a slow process of Islamization that resulted from the flight of non-Muslim populations, immigration of Muslims, and local conversion. The majority of the remaining populace belonged to the lowest classes. While the Arab conquerors themselves left the area after the conquest and moved on to other places, the settlement of Arab tribes in the area both before and after the conquest also contributed to the Islamization. As a result, the Muslim population steadily grew and the area became gradually dominated by Muslims on a political and social level. During the early Islamic period, many Christians and Samaritans, belonging to the Byzantine upper class, migrated from the coastal cities to northern Syria and Cyprus, which were still under Byzantine control, while others fled to the central highlands and the Transjordan. As a result, the coastal towns, formerly important economic centers connected with the rest of the Byzantine world, were emptied of most of their residents. Some of these cities—namely Ashkelon, Acre, Arsuf, and Gaza—now fortified border towns, were resettled by Muslim populations, who developed them into significant Muslim centers. The region of Samaria also underwent a process of Islamization as a result of waves of conversion among the Samaritan population and the influx of Muslims into the area. The predominantly Jacobite Monophysite Christian population had been hostile to Byzantine orthodoxy, and at times for that reason welcomed Muslim rule. There is no strong evidence for forced conversion, or that the jizya tax significantly affected such changes.
The demographic situation in Palestine was further altered by urban decline under the Abbasids, and it is thought that the 749 earthquake hastened this process: many Jews, Christians, and Samaritans emigrated to diaspora communities, while others remained in the devastated cities and poor villages until they converted to Islam. Historical records and archeological evidence suggest that many Samaritans converted under Abbasid and Tulunid rule, after suffering through severe difficulties such as droughts, earthquakes, religious persecution, heavy taxes and anarchy. The same region also saw the settlement of Arabs. Over the period, the Samaritan population drastically decreased, with the rural Samaritan population converting to Islam, and small urban communities remaining in Nablus and Caesarea, as well as in Cairo, Damascus, Aleppo and Sarepta. Nevertheless, the Muslim population remained a minority in a predominantly Christian area, and it is likely that this status persisted until the Crusader period. Crusades and Mongols In 1095, Pope Urban II called upon Christians to wage a holy war and recapture Jerusalem from Muslim rule. Responding to this call, Christians launched the First Crusade, a military campaign aimed at retaking the Holy Land, which ultimately resulted in the successful siege and conquest of Jerusalem in 1099. In the same year, the Crusaders conquered Beit She'an and Tiberias, and in the following decade, they captured coastal cities with the support of Italian city-state fleets, establishing these coastal ports as crucial strongholds for Crusader rule in the region. Following the First Crusade, several Crusader states were established in the Levant, with the Kingdom of Jerusalem (Regnum Hierosolymitanum) assuming a preeminent position and enjoying special status among them. The population consisted predominantly of Muslims, Christians, Jews, and Samaritans, while the Crusaders remained a minority and relied on the local population who worked the soil. The region saw the construction of numerous robust castles and fortresses, yet efforts to establish permanent European villages proved unsuccessful. Around 1180, Raynald of Châtillon, ruler of Transjordan, provoked increasing conflict with the Ayyubid Sultan Saladin (Salah-al-Din), leading to the defeat of the Crusaders in the 1187 Battle of Hattin (above Tiberias). Saladin was able to peacefully take Jerusalem and conquered most of the former Kingdom of Jerusalem. Saladin's court physician was Maimonides, a refugee from Almohad (Muslim) persecution in Córdoba, Spain, where all non-Muslim religions had been banned. The Christian world's response to the loss of Jerusalem came in the Third Crusade (1189–1192). After lengthy battles and negotiations, Richard the Lionheart and Saladin concluded the Treaty of Jaffa in 1192, whereby Christians were granted free passage to make pilgrimages to the holy sites, while Jerusalem remained under Muslim rule. In 1229, Jerusalem peacefully reverted to Christian control as part of a treaty between Holy Roman Emperor Frederick II and Ayyubid sultan al-Kamil that ended the Sixth Crusade. In 1244, Jerusalem was sacked by the Khwarezmian Tatars, who decimated the city's Christian population, drove out the Jews and razed the city. The Khwarezmians were driven out by the Ayyubids in 1247. Mamluk period Between 1258 and 1291, the area was the frontier between Mongol invaders (occasional Crusader allies) and the Mamluks of Egypt.
The conflict impoverished the country and severely reduced the population. In Egypt, a caste of warrior slaves known as the Mamluks gradually took control of the kingdom. The Mamluks were mostly of Turkish origin, and were bought as children and then trained in warfare. They were highly prized warriors, who gave rulers independence of the native aristocracy. They took control of Egypt following a failed invasion by the Crusaders (the Seventh Crusade). The first Mamluk Sultan, Qutuz of Egypt, defeated the Mongols in the Battle of Ain Jalut ("Goliath's spring" near Ein Harod), ending the Mongol advances. He was assassinated by one of his generals, Baibars, who went on to eliminate most of the Crusader outposts. The Mamluks ruled Palestine until 1516, regarding it as part of Syria. In Hebron, Jews were banned from worshipping at the Cave of the Patriarchs (the second-holiest site in Judaism); they were only allowed to enter seven steps inside the site, and the ban remained in place until Israel assumed control of the West Bank in the Six-Day War. The Egyptian Mamluk sultan Al-Ashraf Khalil conquered the last outpost of Crusader rule in 1291. The Mamluks, continuing the policy of the Ayyubids, made the strategic decision to destroy the coastal area and to bring desolation to many of its cities, from Tyre in the north to Gaza in the south. Ports were destroyed and various materials were dumped to make them inoperable. The goal was to prevent attacks from the sea, given the fear of the return of the Crusaders. This had a long-term effect on those areas, which remained sparsely populated for centuries. Activity in that period was concentrated further inland. With the 1492 expulsion of Jews from Spain and the 1497 persecution of Jews and Muslims by Manuel I of Portugal, many Jews moved eastward, with some deciding to settle in Mamluk Palestine. As a consequence, the local Jewish community underwent significant rejuvenation. The influx of Sephardic Jews began under Mamluk rule in the 15th century, and continued throughout the 16th century and especially after the Ottoman conquest. As city-dwellers, the majority of Sephardic Jews preferred to settle in urban areas, mainly in Safed but also in Jerusalem, while the Musta'arbi community made up the majority of the village Jews. Ottoman period Under the Mamluks, the area was a province of Bilad al-Sham (Syria). It was conquered by Turkish Sultan Selim I in 1516–17, becoming a part of the province of Ottoman Syria for the next four centuries, first as the Damascus Eyalet and later as the Syria Vilayet (following the Tanzimat reorganization of 1864). With the more favorable conditions that followed the Ottoman conquest, the immigration of Jews fleeing Catholic Europe, which had already begun under Mamluk rule, continued, and soon an influx of exiled Sephardic Jews came to dominate the Jewish community in the area. In 1558, Selim II (1566–1574), successor to Suleiman, whose wife Nurbanu Sultan was Jewish, gave control of Tiberias to Doña Gracia Mendes Nasi, one of the richest women in Europe and an escapee from the Inquisition. She encouraged Jewish refugees to settle in the area and established a Hebrew printing press. Safed became a centre for study of the Kabbalah and other Jewish religious studies, culminating with Joseph Karo's writing of the Shulchan Aruch – published in 1565 in Venice – which became the near-universal standard of Jewish religious law.
Doña Nasi's nephew, Joseph Nasi, was made governor of Tiberias and he encouraged Jewish settlement from Italy. In 1660, a Druze power struggle led to the destruction of Safed and Tiberias. In the late 18th century a local Arab sheikh, Zahir al-Umar, created a de facto independent Emirate in the Galilee. Ottoman attempts to subdue the Sheikh failed, but after Zahir's death the Ottomans restored their rule in the area. In 1799, Napoleon briefly occupied the country and planned a proclamation inviting Jews to create a state. The proclamation was shelved following his defeat at Acre. In 1831, Muhammad Ali of Egypt, an Ottoman ruler who left the Empire and tried to modernize Egypt, conquered Ottoman Syria and imposed conscription, leading to the Arab revolt. In 1838, there was another Druze revolt. In 1839 Moses Montefiore met with Muhammed Pasha in Egypt and signed an agreement to establish 100–200 Jewish villages in the Damascus Eyalet of Ottoman Syria, but in 1840 the Egyptians withdrew before the deal was implemented, returning the area to Ottoman governorship. In 1844, Jews constituted the largest population group in Jerusalem. By 1896 Jews constituted an absolute majority in Jerusalem, but the overall population in Palestine was 88% Muslim and 9% Christian. Between 1882 and 1903, approximately 35,000 Jews moved to Palestine, known as the First Aliyah. In the Russian Empire, Jews faced growing persecution and legal restrictions. Half the world's Jews lived in the Russian Empire, where they were restricted to living in the Pale of Settlement. Severe pogroms in the early 1880s and legal repression led to 2 million Jews emigrating from the Russian Empire. 1.5 million went to the United States. Popular destinations were also Germany, France, the United Kingdom, the Netherlands, Argentina and Palestine. The Zionist movement began in earnest in 1882 with Leon Pinsker's pamphlet Auto-Emancipation, which argued for the creation of a Jewish national homeland as a means to avoid the violence plaguing Jewish communities in Eastern Europe. At the 1884 Katowice Conference, Russian Jews established the Bilu and Hovevei Zion ("Lovers of Zion") movements with the aim of settling in Palestine. In 1878, Russian Jewish emigrants established the village of Petah Tikva ("The Beginning of Hope"), followed by Rishon LeZion ("First to Zion") in 1882. The existing Ashkenazi communities were concentrated in the Four Holy Cities, extremely poor and relied on donations (halukka) from groups abroad, while the new settlements were small farming communities, but still relied on funding by the French Baron, Edmond James de Rothschild, who sought to establish profitable enterprises. Many early migrants could not find work and left, but despite the problems, more settlements arose and the community grew. After the Ottoman conquest of Yemen in 1881, a large number of Yemenite Jews also emigrated to Palestine, often driven by Messianism. In 1896 Theodor Herzl published Der Judenstaat (The Jewish State), in which he asserted that the solution to growing antisemitism in Europe (the so-called "Jewish Question") was to establish a Jewish state. In 1897, the World Zionist Organization was founded and the First Zionist Congress proclaimed its aim "to establish a home for the Jewish people in Palestine secured under public law." The Congress chose Hatikvah ("The Hope") as its anthem. Between 1904 and 1914, around 40,000 Jews settled in the area now known as Israel (the Second Aliyah). 
In 1908, the World Zionist Organization set up the Palestine Bureau (also known as the "Eretz Israel Office") in Jaffa and began to adopt a systematic Jewish settlement policy. In 1909, residents of Jaffa bought land outside the city walls and built the first entirely Hebrew-speaking town, Ahuzat Bayit (later renamed Tel Aviv). In 1915–1916, Talaat Pasha of the Young Turks forced around a million Armenian Christians from their homes in Eastern Turkey, marching them south through Syria, in what is now known as the Armenian genocide. The number of dead is thought to be around 700,000. Hundreds of thousands were forcibly converted to Islam. A community of survivors settled in Jerusalem, one of whom developed the now-iconic Armenian pottery. During World War I, most Jews supported the Germans because they were fighting the Russians, who were regarded as the Jews' main enemy. In Britain, the government sought Jewish support for the war effort for a variety of reasons, including an antisemitic perception of "Jewish power" in the Ottoman Empire's Young Turks movement, which was based in Thessaloniki, the most Jewish city in Europe (40% of its 160,000 inhabitants were Jewish). The British also hoped to secure American Jewish support for US intervention on Britain's behalf. There was already sympathy for the aims of Zionism in the British government, including the Prime Minister, Lloyd George. Over 14,000 Jews were expelled from the Jaffa area by the Ottoman military commander in 1914–1915, on suspicion that they were subjects of Russia, an enemy power, or Zionists wishing to detach Palestine from the Ottoman Empire. When the entire population of both Jaffa and Tel Aviv, including Muslims, was subjected to an expulsion order in April 1917, the affected Jews could not return until the British conquest, which drove the Turks out of Southern Syria, ended in 1918. A year prior, in 1917, the British foreign minister, Arthur Balfour, sent a public letter to the British Lord Rothschild, a leading member of his party and leader of the Jewish community. The letter subsequently became known as the Balfour Declaration. It stated that the British Government "view[ed] with favour the establishment in Palestine of a national home for the Jewish people". The declaration provided the British government with a pretext for claiming and governing the country. New Middle Eastern boundaries were decided by an agreement between British and French bureaucrats. A Jewish Legion composed largely of Zionist volunteers organized by Ze'ev Jabotinsky and Joseph Trumpeldor participated in the British invasion; it had also participated in the failed Gallipoli Campaign. The Nili Zionist spy network provided the British with details of Ottoman plans and troop concentrations. The Ottoman Empire had chosen to ally itself with Germany when the First World War began. Arab leaders dreamed of freeing themselves from Ottoman rule and establishing self-government or forming an independent Arab state. Britain therefore contacted Hussein bin Ali of the Kingdom of Hejaz and proposed cooperation. Together they organized the Arab Revolt, which Britain supplied with very large quantities of rifles and ammunition. Through cooperation between British artillery and Arab infantry, the city of Aqaba on the Red Sea was captured. The Arab army then continued north while Britain attacked the Ottomans from the sea. In 1917–1918, Jerusalem and Damascus were conquered from the Ottomans. Britain then broke off cooperation with the Arab army.
It turned out that Britain had already entered into the secret Sykes–Picot Agreement, under which only Britain and France would be allowed to administer the land conquered from the Ottoman Empire. After the Ottomans were pushed out, Palestine came under martial law. The British, French and Arab Occupied Enemy Territory Administration governed the area from shortly before the armistice with the Ottomans until the promulgation of the Mandate in 1920. Mandatory Palestine The British Mandate (in effect, British rule) of Palestine, including the Balfour Declaration, was confirmed by the League of Nations in 1922 and came into effect in 1923. The territory of Transjordan was also covered by the Mandate, but under separate rules that excluded it from the Balfour Declaration. Britain signed a treaty with the United States (which did not join the League of Nations) in which the United States endorsed the terms of the Mandate; the treaty was approved unanimously by both the U.S. Senate and House of Representatives. The Balfour Declaration had been published on 2 November 1917, and the Bolsheviks seized control of Russia a week later, leading to civil war in the Russian Empire. Between 1918 and 1921, a series of pogroms led to the death of at least 100,000 Jews (mainly in what is now Ukraine), and the displacement as refugees of a further 600,000. This led to further migration to Palestine. Between 1919 and 1923, some 40,000 Jews arrived in Palestine in what is known as the Third Aliyah. Many of the Jewish immigrants of this period were Socialist Zionists and supported the Bolsheviks. The migrants became known as pioneers (halutzim); experienced or trained in agriculture, they established self-sustaining communes called kibbutzim. Malarial marshes in the Jezreel Valley and Hefer Plain were drained and converted to agricultural use. Land was bought by the Jewish National Fund, a Zionist charity that collected money abroad for that purpose. After the French victory over the Arab Kingdom of Syria ended hopes of Arab independence, there were clashes between Arabs and Jews in Jerusalem during the 1920 Nebi Musa riots and in Jaffa the following year, leading to the establishment of the Haganah underground Jewish militia. A Jewish Agency was created, which issued the entry permits granted by the British and distributed funds donated by Jews abroad. Between 1924 and 1929, over 80,000 Jews arrived in the Fourth Aliyah, fleeing antisemitism and heavy tax burdens imposed on trade in Poland and Hungary; they were inspired by Zionism and motivated by the closure of United States borders under the Immigration Act of 1924, which severely limited immigration from Eastern and Southern Europe. Pinhas Rutenberg, a former Commissar of St Petersburg in Russia's pre-Bolshevik Kerensky Government, built the first electricity generators in Palestine. In 1925, the Jewish Agency established the Hebrew University in Jerusalem and the Technion (technological university) in Haifa. British authorities introduced the Palestine pound (worth 1000 "mils") in 1927, replacing the Egyptian pound as the unit of currency in the Mandate. From 1928, the democratically elected Va'ad Leumi (Jewish National Council or JNC) became the main administrative institution of the Palestine Jewish community (Yishuv) and included non-Zionist Jews. As the Yishuv grew, the JNC adopted more government-type functions, such as education, health care, and security. With British permission, the Va'ad Leumi raised its own taxes and ran independent services for the Jewish population.
In 1929, tensions grew over the Kotel (Wailing Wall), the holiest spot in the world for modern Judaism, which was then a narrow alleyway where the British banned Jews from using chairs or curtains; many of the worshippers were elderly and needed seats, and they also wanted to separate women from men. The Mufti of Jerusalem said it was Muslim property and deliberately had cattle driven through the alley. He alleged that the Jews were seeking control of the Temple Mount. This provided the spark for the August 1929 Palestine riots. The main victims were the ancient (non-Zionist) Jewish community at Hebron, who were massacred. The riots led to right-wing Zionists establishing their own militia in 1931, the Irgun Tzvai Leumi (National Military Organization, known in Hebrew by its acronym "Etzel"), which was committed to a more aggressive policy towards the Arab population. During the interwar period, the perception grew that there was an irreconcilable tension between the Mandate's two functions: providing for a Jewish homeland in Palestine and preparing the country for self-determination. The British rejected the principle of majority rule or any other measure that would give the Arabs, who formed the majority of the population, control over Palestinian territory. Between 1929 and 1938, 250,000 Jews arrived in Palestine (the Fifth Aliyah). In 1933, the Jewish Agency and the Nazis negotiated the Ha'avara Agreement (transfer agreement), under which 50,000 German Jews would be transferred to Palestine. The Jews' possessions were confiscated, and in return the Nazis allowed the Ha'avara organization to purchase 14 million pounds' worth of German goods for export to Palestine and use it to compensate the immigrants. Although many Jews wanted to leave Nazi Germany, the Nazis prevented Jews from taking any money and restricted them to two suitcases, so few could pay the British entry tax. The agreement was controversial, and the Labour Zionist leader who negotiated it, Haim Arlosoroff, was assassinated in Tel Aviv in 1933. The assassination was used by the British to create tension between the Zionist left and the Zionist right. Arlosoroff had been the boyfriend of Magda Ritschel some years before she married Joseph Goebbels. There has been speculation that he was assassinated by the Nazis to hide the connection, but there is no evidence for this. Between 1933 and 1936, 174,000 Jews arrived despite the large sums the British demanded for immigration permits: families with capital had to prove they had 1,000 pounds (equivalent to £85,824 in 2023), professionals 500 pounds, and skilled labourers 250 pounds. Jewish immigration and Nazi propaganda contributed to the large-scale 1936–1939 Arab revolt in Palestine, a largely nationalist uprising directed at ending British rule. The head of the Jewish Agency, Ben-Gurion, responded to the Arab Revolt with a policy of "Havlagah"—self-restraint and a refusal to be provoked by Arab attacks in order to prevent polarization. The Etzel group broke off from the Haganah in opposition to this policy. The British responded to the revolt with the Peel Commission (1936–37), a public inquiry that recommended that an exclusively Jewish territory be created in the Galilee and on the western coast (including the population transfer of 225,000 Arabs), with the rest becoming an exclusively Arab area.
The two main Jewish leaders, Chaim Weizmann and David Ben-Gurion, had convinced the Zionist Congress to give equivocal approval to the Peel recommendations as a basis for further negotiation. The plan was rejected outright by the Palestinian Arab leadership, who renewed the revolt, causing the British to abandon the plan as unworkable. Testifying before the Peel Commission, Weizmann said "There are in Europe 6,000,000 people ... for whom the world is divided into places where they cannot live and places where they cannot enter." In 1938, the US called an international conference to address the question of the vast numbers of Jews trying to escape Europe. Britain made its attendance contingent on Palestine being kept out of the discussion. No Jewish representatives were invited. The Nazis proposed their own solution: that the Jews of Europe be shipped to Madagascar (the Madagascar Plan). The conference proved fruitless, and the Jews remained stuck in Europe. With millions of Jews trying to leave Europe and every country closed to Jewish migration, the British decided to close Palestine. The White Paper of 1939 recommended that an independent Palestine, governed jointly by Arabs and Jews, be established within 10 years. The White Paper agreed to allow 75,000 Jewish immigrants into Palestine over the period 1940–44, after which migration would require Arab approval. Both the Arab and Jewish leadership rejected the White Paper. In March 1940, the British High Commissioner for Palestine issued an edict banning Jews from purchasing land in 95% of Palestine. Jews now resorted to illegal immigration (Aliyah Bet or "Ha'apalah"), often organized by the Mossad Le'aliyah Bet and the Irgun. With no outside help and no countries ready to admit them, very few Jews managed to escape Europe between 1939 and 1945. Those caught by the British were mostly imprisoned in Mauritius. During the Second World War, the Jewish Agency worked to establish a Jewish army that would fight alongside the British forces. Churchill supported the plan, but British military and government opposition led to its rejection. The British demanded that the number of Jewish recruits match the number of Arab recruits. In June 1940, Italy declared war on the British Commonwealth and sided with Germany. Within a month, Italian planes bombed Tel Aviv and Haifa, inflicting multiple casualties. In May 1941, the Palmach was established to defend the Yishuv against the planned Axis invasion through North Africa. The British refusal to provide arms to the Jews, even when Rommel's forces were advancing through Egypt in June 1942 (intent on occupying Palestine), together with the 1939 White Paper, led to the emergence of a Zionist leadership in Palestine that believed conflict with Britain was inevitable. Despite this, the Jewish Agency called on Palestine's Jewish youth to volunteer for the British Army; some 30,000 Palestinian Jews and 12,000 Palestinian Arabs enlisted in the British armed forces during the war. In June 1944, the British agreed to create a Jewish Brigade that would fight in Italy. Approximately 1.5 million Jews around the world served in every branch of the Allied armies, mainly in the Soviet and US armies; some 200,000 Jews died serving in the Soviet army alone. A small group (about 200 activists), dedicated to resisting the British administration in Palestine, broke away from the Etzel (which advocated support for Britain during the war) and formed the "Lehi" (Stern Gang), led by Avraham Stern.
In 1942, the USSR released the Revisionist Zionist leader Menachem Begin from the Gulag and he went to Palestine, taking command of the Etzel organization with a policy of increased conflict against the British. At about the same time, Yitzhak Shamir escaped from the camp in Eritrea where the British were holding Lehi activists without trial, taking command of the Lehi (Stern Gang). Jews in the Middle East were also affected by the war. Most of North Africa came under Nazi control, and many Jews there were used as slave labourers. The 1941 pro-Axis coup in Iraq was accompanied by massacres of Jews. The Jewish Agency put together plans for a last stand in the event of Rommel invading Palestine (the Nazis planned to exterminate Palestine's Jews). Between 1939 and 1945, the Nazis, aided by local forces, led systematic efforts to kill every person of Jewish extraction in Europe (the Holocaust), causing the deaths of approximately 6 million Jews. A quarter of those killed were children. The Polish and German Jewish communities, which played an important role in defining the pre-1945 Jewish world, mostly ceased to exist. In the United States and Palestine, Jews of European origin became disconnected from their families and roots. As the Holocaust mainly affected Ashkenazi Jews, Sephardi and Mizrahi Jews, who had been a minority, became a much more significant factor in the Jewish world. Those Jews who survived in central Europe were displaced persons (refugees); an Anglo-American Committee of Inquiry, established to examine the Palestine issue, surveyed their ambitions and found that over 95% wanted to migrate to Palestine. In the Zionist movement, the moderate pro-British (and British citizen) Weizmann, whose son died flying in the RAF, was undermined by Britain's anti-Zionist policies. Leadership of the movement passed to the Jewish Agency in Palestine, now headed by the anti-British Socialist-Zionist party (Mapai) led by David Ben-Gurion. The British Empire was severely weakened by the war. In the Middle East, the war had made Britain conscious of its dependence on Arab oil. Shortly after VE Day, the Labour Party won the general election in Britain. Although Labour Party conferences had for years called for the establishment of a Jewish state in Palestine, the Labour government now decided to maintain the 1939 White Paper policies. Illegal migration (Aliyah Bet) became the main form of Jewish entry into Palestine. Across Europe, Bricha ("flight"), an organization of former partisans and ghetto fighters, smuggled Holocaust survivors from Eastern Europe to Mediterranean ports, where small boats tried to breach the British blockade of Palestine. Meanwhile, Jews from Arab countries began moving into Palestine overland. Despite British efforts to curb immigration, during the 14 years of the Aliyah Bet, over 110,000 Jews entered Palestine. By the end of World War II, the Jewish population of Palestine had increased to 33% of the total population. In an effort to win independence, Zionists now waged a guerrilla war against the British. The main underground Jewish militia, the Haganah, formed an alliance called the Jewish Resistance Movement with the Etzel and Stern Gang to fight the British. In June 1946, following instances of Jewish sabotage, such as the Night of the Bridges, the British launched Operation Agatha, arresting 2,700 Jews, including the leadership of the Jewish Agency, whose headquarters were raided. Those arrested were held without trial.
On 4 July 1946, a massive pogrom in Poland led to a wave of Holocaust survivors fleeing Europe for Palestine. Three weeks later, the Irgun bombed the British military headquarters at the King David Hotel in Jerusalem, killing 91 people. In the days following the bombing, Tel Aviv was placed under curfew and over 120,000 Jews, nearly 20% of the Jewish population of Palestine, were questioned by the police. In the US, Congress criticized British handling of the situation and considered delaying loans that were vital to British post-war recovery. The alliance between the Haganah and Etzel was dissolved after the King David bombing. Between 1945 and 1948, 100,000–120,000 Jews left Poland. Their departure was largely organized by Zionist activists under the umbrella of the semi-clandestine organization Berihah ("Flight"). Berihah was also responsible for the organized emigration of Jews from Romania, Hungary, Czechoslovakia and Yugoslavia, totalling 250,000 Holocaust survivors (including those from Poland). The British imprisoned the Jews trying to enter Palestine in the Atlit detainee camp and Cyprus internment camps. Those held were mainly Holocaust survivors, including large numbers of children and orphans. In response to Cypriot fears that the Jews would never leave, and because the 75,000 quota established by the 1939 White Paper had never been filled, the British allowed the refugees to enter Palestine at a rate of 750 per month. On 2 April 1947, the United Kingdom requested that the question of Palestine be handled by the General Assembly. The General Assembly created a committee, the United Nations Special Committee on Palestine (UNSCOP), to report on "the question of Palestine". In July 1947, UNSCOP visited Palestine and met with Jewish and Zionist delegations. The Arab Higher Committee boycotted the meetings. During the visit, the British Foreign Secretary, Ernest Bevin, ordered that passengers from an Aliyah Bet ship, SS Exodus 1947, be sent back to Europe. The Holocaust survivors on board were forcibly removed by British troops at Hamburg, Germany. The principal non-Zionist Orthodox Jewish (or Haredi) party, Agudat Israel, recommended to UNSCOP that a Jewish state be set up after reaching a religious status quo agreement with Ben-Gurion. The agreement granted an exemption from military service to a quota of yeshiva (religious seminary) students and to all Orthodox women, made the Sabbath the national weekend, guaranteed kosher food in government institutions and allowed Orthodox Jews to maintain a separate education system. The majority report of UNSCOP proposed "an independent Arab State, an independent Jewish State, and the City of Jerusalem", the last to be under "an International Trusteeship System". On 29 November 1947, in Resolution 181 (II), the General Assembly adopted the majority report of UNSCOP, but with slight modifications. The plan also called for the British to allow "substantial" Jewish migration by 1 February 1948. Neither Britain nor the UN Security Council took any action to implement the recommendation made by the resolution, and Britain continued detaining Jews attempting to enter Palestine. Concerned that partition would severely damage Anglo-Arab relations, Britain denied UN representatives access to Palestine during the period between the adoption of Resolution 181 (II) and the termination of the British Mandate. The British withdrawal was completed in May 1948.
However, Britain continued to hold Jewish immigrants of "fighting age" and their families on Cyprus until March 1949. The General Assembly's vote caused joy in the Jewish community and anger in the Arab community. Violence broke out between the sides, escalating into civil war. From January 1948, operations became increasingly militarized, with the intervention of a number of Arab Liberation Army regiments inside Palestine, each active in a variety of distinct sectors around the different coastal towns. They consolidated their presence in Galilee and Samaria. Abd al-Qadir al-Husayni came from Egypt with several hundred men of the Army of the Holy War. Having recruited a few thousand volunteers, he organized the blockade of the 100,000 Jewish residents of Jerusalem. The Yishuv tried to supply the city using convoys of up to 100 armoured vehicles, but largely failed. By March, almost all Haganah's armoured vehicles had been destroyed, the blockade was in full operation, and hundreds of Haganah members who had tried to bring supplies into the city were killed. Up to 100,000 Arabs, from the urban upper and middle classes in Haifa, Jaffa and Jerusalem, or Jewish-dominated areas, evacuated abroad or to Arab centres eastwards. This situation caused the US to withdraw their support for the Partition plan, thus encouraging the Arab League to believe that the Palestinian Arabs, reinforced by the Arab Liberation Army, could put an end to the plan for partition. The British, on the other hand, decided on 7 February 1948 to support the annexation of the Arab part of Palestine by Transjordan. The Jordanian army was commanded by the British. David Ben-Gurion reorganized the Haganah and made conscription obligatory. Every Jewish man and woman in the country had to receive military training. Thanks to funds raised by Golda Meir from sympathisers in the United States, and Stalin's decision to support the Zionist cause, the Jewish representatives of Palestine were able to purchase important arms in Eastern Europe. Ben-Gurion gave Yigael Yadin the responsibility to plan for the announced intervention of the Arab states. The result of his analysis was Plan Dalet, in which Haganah passed from the defensive to the offensive. The plan sought to establish Jewish territorial continuity by conquering mixed zones. Tiberias, Haifa, Safed, Beisan, Jaffa and Acre fell, resulting in the flight of more than 250,000 Palestinian Arabs. On 14 May 1948, on the day the last British forces left Haifa, the Jewish People's Council gathered at the Tel Aviv Museum and proclaimed the establishment of a Jewish state, to be known as the State of Israel. State of Israel In 1948, following the 1947–1948 war in Mandatory Palestine, the Israeli Declaration of Independence sparked the 1948 Arab–Israeli War. This resulted in the 1948 Palestinian expulsion and flight from the land that the State of Israel came to control, and led to waves of Jewish immigration from other parts of the Middle East. The latter half of the 20th century saw further conflicts between Israel and its neighbouring Arab nations. In 1967, the Six-Day War erupted; in its aftermath, Israel captured and occupied the Golan Heights from Syria, the West Bank from Jordan, and the Gaza Strip and the Sinai Peninsula from Egypt. In 1973, the Yom Kippur War began with an attack by Egypt on the Israeli-occupied Sinai Peninsula. In 1979, the Egypt–Israel peace treaty was signed, based on the Camp David Accords. 
In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian National Authority. In 1994, the Israel–Jordan peace treaty was signed. Despite efforts to finalize the peace agreement, the conflict continues.
========================================
[SOURCE: https://www.theverge.com/netflix] | [TOKENS: 1529]
Netflix With nearly 150 million subscribers around the world, Netflix has a commanding lead in the streaming wars. But it’s also facing heavy competition from deep-pocketed conglomerates like Disney, Apple, and AT&T, and an ongoing wave of narrow, targeted streaming sites like CBS All Access and DC Universe, which can draw on popular existing franchises for original content. As fewer companies are willing to license out their films and shows to other streaming sites, Netflix is pouring billions of dollars annually into its own original content. Follow along with The Verge as we look at Netflix’s new films and shows, its evolving strategies against new entrants in the market, and how it’s leveraging its technological and marketing lead. I Am Frankelda — co-writer / directors Arturo and Roy Ambriz’s stop motion dark fantasy film about a girl with a strange connection to another dimension — has been acquired by Netflix and is slated to debut on the streamer sometime later this year. One night in the audience of Netflix’s most ambitious live show yet. Now, Paramount is offering to cover the $2.8 billion termination fee that Warner Bros. Discovery would owe Netflix for abandoning the $82.7 billion merger agreement. It’s also tossing in a $0.25 per share “ticking fee” that it would pay shareholders for every quarter its deal hasn’t closed beyond December 31st, 2026. [The Hollywood Reporter] There’s a lot to see in Netflix’s new trailer for the live-action One Piece’s upcoming second season, but the most surprising reveal here is a fresh look at Tony Tony Chopper’s (Mikaela Hoover) Walk Point form that turns him into a much more normal-looking reindeer. The show’s out March 10th. The next four episodes of Sesame Street are dropping soon on Netflix and will include a cameo from Miley Cyrus. Will they also go back to having a Letter and Number of the Day and a proper ending song? I won’t hold my breath, but I’ll still be bitter. You’ll be able to livestream the famous K-Pop group’s first concert in three years on March 21st, the day after the new BTS album comes out. You might want some coffee, though; the show starts at 7AM ET that day. [Netflix Tudum] Though Netflix originals The Pete Davidson Show and The White House with Michael Irvin look and feel like a podcast, they lack a lot of features podcast listeners are used to, like RSS feed downloads and chapter markers. Here’s my take. What the bidding war over Warner Bros. Discovery says about the future of Hollywood, with Puck’s Julia Alexander. As part of its earnings report released on Tuesday, Netflix also said it has launched AI-powered tools designed to “connect members with the most relevant titles for them to watch.” It also plans on building upon its AI advertising tools that already allow companies to blend Netflix’s IP with their ads. [Netflix] Netflix, in its Q4 shareholder letter, says that early results for its cloud-streamed TV games launched last year are “encouraging.” It’s going to expand the lineup in 2026 with games like its new FIFA football sim. (No mention of new games available on mobile.) [Netflix] Originally, the live-action movies were going to come to Netflix starting in 2027, but What’s on Netflix reports that they’re appearing starting this year, beginning with Megan 2.0 on January 26th. The films still stream on Peacock first before jumping to Netflix. 
[What's on Netflix] Matt Damon and Ben Affleck went on the Joe Rogan Experience to promote their new film The Rip, and ended up sharing some very depressing details about Netflix’s love of repeated exposition dumps and its approach to filmmaking in this age of constant distraction. As Variety reports: “The standard way to make an action movie that we learned was, you usually have three set pieces. One in the first act, one in the second, one in the third,” Damon explained. “You spend most of your money on that one in the third act. That’s your finale. And now they’re like, ‘Can we get a big one in the first five minutes? We want people to stay. And it wouldn’t be terrible if you reiterated the plot three or four times in the dialogue because people are on their phones while they’re watching.’” As part of a multiyear agreement, Sony Pictures Entertainment films will stream on Netflix worldwide following their “full theatrical and home entertainment runs.” The deal is worth more than $7 billion, Deadline reports. Netflix already has Pay-1 rights in the US and other select territories. [Netflix] The Dealer will be a crime series following a casino dealer who starts gambling herself to fund her wedding. Squid Game creator Hwang Dong-hyuk will produce the show, though he isn’t writing or directing this time. As reported by Bloomberg, the revised deal would replace Netflix’s existing agreement to acquire WBD’s studio and streaming business in a cash and stock transaction. The rumored change comes as Paramount continues to press WBD to accept its “superior” $108 billion all-cash deal for the entire company. [Bloomberg] The new teaser trailer for the second season of Netflix’s live-action One Piece adaptation introduces the show’s take on Baroque Works, the superpowered crime syndicate fixated on toppling the Arabasta Kingdom and taking out anyone who gets in their way. The new season drops on March 10th. The team formerly at Boss Fight Entertainment, which worked on Netflix’s Squid Game mobile game before being shut down last year, has formed a new studio called Sunwise Games. The studio is working on a new, original game. “I created Sunwise because it’s the change I want to see in the games industry: clear scope, ethical monetization, and teams built to deliver reliably over time,” founder Irin Berry says in a statement. If you believe internet rumors claiming the Stranger Things finale had “two hours” cut from its two-hour runtime, and have ignored actors and others saying that the claim is fake, the show’s creators have responded. Asked about it by Variety, Matt Duffer said, “Obviously, that’s not a real thing,” while Ross Duffer added, “I don’t think there’s a single cut scene in the entire season.” [Variety] The main Stranger Things series is coming to an end when the final episode is released tomorrow, on New Year’s Eve at 8PM ET. Netflix released this new trailer for the finale, which is also going to be shown in theaters, where people can watch together regardless of their familiarity with “shipping” and/or fandom designation. In an Elite Daily interview ahead of this week’s penultimate release, series star Gaten Matarazzo referenced the “Stonathan” pairing and said he liked the idea behind “Steddie” fanfics. But his saying that the “Byler” shipping between Will and Mike was “very funny. I see them as just very good friends” seemingly went too far for some, with social media posts targeting the interviewer and some even claiming it was a fake quote.
Junichi Okada talks about the new Netflix series and how his role as action planner touches nearly every part of it.
========================================
[SOURCE: https://en.wikipedia.org/wiki/BBC_News#India] | [TOKENS: 8810]
Contents BBC News BBC News is an operational business division of the British Broadcasting Corporation (BBC) responsible for the gathering and broadcasting of news and current affairs in the UK and around the world. The department is the world's largest broadcast news organisation and generates about 120 hours of radio and television output each day, as well as online news coverage. The service has over 5,500 journalists working across its output including in 50 foreign news bureaus where more than 250 foreign correspondents are stationed. Deborah Turness has been the CEO of news and current affairs since September 2022. In 2019, it was reported in an Ofcom report that the BBC spent £136m on news during the period April 2018 to March 2019. BBC News' domestic, global and online news divisions are housed within the largest live newsroom in Europe, in Broadcasting House in central London. Parliamentary coverage is produced and broadcast from studios in London. Through BBC English Regions, the BBC also has regional centres across England and national news centres in Northern Ireland, Scotland and Wales. All nations and English regions produce their own local news programmes and other current affairs and sport programmes. The BBC is a quasi-autonomous corporation authorised by royal charter, making it operationally independent of the government. As of 2024, the BBC reaches an average of 450 million people per week, with the BBC World Service accounting for 320 million people. History This is London calling – 2LO calling. Here is the first general news bulletin, copyright by Reuters, Press Association, Exchange Telegraph and Central News. — BBC news programme opening during the 1920s The British Broadcasting Company broadcast its first radio bulletin from radio station 2LO on 14 November 1922. Wishing to avoid competition, newspaper publishers persuaded the government to ban the BBC from broadcasting news before 7 pm, and to force it to use wire service copy instead of reporting on its own. The BBC gradually gained the right to edit the copy and, in 1934, created its own news operation. However, it could not broadcast news before 6 p.m. until World War II. In addition to news, Gaumont British and Movietone cinema newsreels had been broadcast on the TV service since 1936, with the BBC producing its own equivalent Television Newsreel programme from January 1948. A weekly Children's Newsreel was inaugurated on 23 April 1950, to around 350,000 receivers. The network began simulcasting its radio news on television in 1946, with a still picture of Big Ben. Televised bulletins began on 5 July 1954, broadcast from leased studios within Alexandra Palace in London. The public's interest in television and live events was stimulated by Elizabeth II's coronation in 1953. It is estimated that up to 27 million people viewed the programme in the UK, overtaking radio's audience of 12 million for the first time. Those live pictures were fed from 21 cameras in central London to Alexandra Palace for transmission, and then on to other UK transmitters opened in time for the event. That year, there were around two million TV Licences held in the UK, rising to over three million the following year, and four and a half million by 1955. Television news, although physically separate from its radio counterpart, was still firmly under radio news' control in the 1950s. 
Correspondents provided reports for both outlets, and the first televised bulletin, shown on 5 July 1954 on the then BBC television service and presented by Richard Baker, involved his providing narration off-screen while stills were shown. This was then followed by the customary Television Newsreel with a recorded commentary by John Snagge (and on other occasions by Andrew Timothy). On-screen newsreaders were introduced a year later in 1955 – Kenneth Kendall (the first to appear in vision), Robert Dougall, and Richard Baker—three weeks before ITN's launch on 21 September 1955. Mainstream television production had started to move out of Alexandra Palace in 1950 to larger premises – mainly at Lime Grove Studios in Shepherd's Bush, west London – taking Current Affairs (then known as Talks Department) with it. It was from here that the first Panorama, a new documentary programme, was transmitted on 11 November 1953, with Richard Dimbleby becoming anchor in 1955. In 1958, Hugh Carleton Greene became head of News and Current Affairs. On 1 January 1960, Greene became Director-General. Greene made changes that were aimed at making BBC reporting more similar to its competitor ITN, which had been highly rated by study groups held by Greene. A newsroom was created at Alexandra Palace, television reporters were recruited and given the opportunity to write and voice their own scripts, without having to cover stories for radio too. On 20 June 1960, Nan Winton, the first female BBC network newsreader, appeared in vision. 19 September 1960 saw the start of the radio news and current affairs programme The Ten O'clock News. BBC2 started transmission on 20 April 1964 and began broadcasting a new show, Newsroom. The World at One, a lunchtime news programme, began on 4 October 1965 on the then Home Service, and the year before News Review had started on television. News Review was a summary of the week's news, first broadcast on Sunday, 26 April 1964 on BBC 2 and harking back to the weekly Newsreel Review of the Week, produced from 1951, to open programming on Sunday evenings–the difference being that this incarnation had subtitles for the deaf and hard-of-hearing. As this was the decade before electronic caption generation, each superimposition ("super") had to be produced on paper or card, synchronised manually to studio and news footage, committed to tape during the afternoon, and broadcast early evening. Thus Sundays were no longer a quiet day for news at Alexandra Palace. The programme ran until the 1980s – by then using electronic captions, known as Anchor – to be superseded by Ceefax subtitling (a similar Teletext format), and the signing of such programmes as See Hear (from 1981). On Sunday 17 September 1967, The World This Weekend, a weekly news and current affairs programme, launched on what was then Home Service, but soon-to-be Radio 4. Preparations for colour began in the autumn of 1967 and on Thursday 7 March 1968 Newsroom on BBC2 moved to an early evening slot, becoming the first UK news programme to be transmitted in colour – from Studio A at Alexandra Palace. News Review and Westminster (the latter a weekly review of Parliamentary happenings) were "colourised" shortly after. However, much of the insert material was still in black and white, as initially only a part of the film coverage shot in and around London was on colour reversal film stock, and all regional and many international contributions were still in black and white. 
Colour facilities at Alexandra Palace were technically very limited for the next eighteen months, as it had only one RCA colour Quadruplex videotape machine and, eventually two Pye plumbicon colour telecines–although the news colour service started with just one. Black and white national bulletins on BBC 1 continued to originate from Studio B on weekdays, along with Town and Around, the London regional "opt out" programme broadcast throughout the 1960s (and the BBC's first regional news programme for the South East), until it started to be replaced by Nationwide on Tuesday to Thursday from Lime Grove Studios early in September 1969. Town and Around was never to make the move to Television Centre – instead it became London This Week which aired on Mondays and Fridays only, from the new TVC studios. The BBC moved production out of Alexandra Palace in 1969. BBC Television News resumed operations the next day with a lunchtime bulletin on BBC1 – in black and white – from Television Centre, where it remained until March 2013. This move to a smaller studio with better technical facilities allowed Newsroom and News Review to replace back projection with colour-separation overlay. During the 1960s, satellite communication had become possible; however, it was some years before digital line-store conversion was able to undertake the process seamlessly. On 14 September 1970, the first Nine O'Clock News was broadcast on television. Robert Dougall presented the first week from studio N1 – described by The Guardian as "a sort of polystyrene padded cell"—the bulletin having been moved from the earlier time of 20.50 as a response to the ratings achieved by ITN's News at Ten, introduced three years earlier on the rival ITV. Richard Baker and Kenneth Kendall presented subsequent weeks, thus echoing those first television bulletins of the mid-1950s. Angela Rippon became the first female news presenter of the Nine O'Clock News in 1975. Her work outside the news was controversial at the time, appearing on The Morecambe and Wise Christmas Show in 1976 singing and dancing. The first edition of John Craven's Newsround, initially intended only as a short series and later renamed just Newsround, came from studio N3 on 4 April 1972. Afternoon television news bulletins during the mid to late 1970s were broadcast from the BBC newsroom itself, rather than one of the three news studios. The newsreader would present to camera while sitting on the edge of a desk; behind him staff would be seen working busily at their desks. This period corresponded with when the Nine O'Clock News got its next makeover, and would use a CSO background of the newsroom from that very same camera each weekday evening. Also in the mid-1970s, the late night news on BBC2 was briefly renamed Newsnight, but this was not to last, or be the same programme as we know today – that would be launched in 1980 – and it soon reverted to being just a news summary with the early evening BBC2 news expanded to become Newsday. News on radio was to change in the 1970s, and on Radio 4 in particular, brought about by the arrival of new editor Peter Woon from television news and the implementation of the Broadcasting in the Seventies report. These included the introduction of correspondents into news bulletins where previously only a newsreader would present, as well as the inclusion of content gathered in the preparation process. 
New programmes, PM and The World Tonight, were also added to the daily schedule as part of the plan for the station to become a "wholly speech network". Newsbeat launched as the news service on Radio 1 on 10 September 1973. On 23 September 1974, Ceefax, a teletext system bringing text-only news content to television screens, was launched. Engineers had originally begun developing the system to bring news to deaf viewers, but it was expanded. The Ceefax service became much more diverse before it ceased on 23 October 2012: it not only provided subtitling for all channels, it also gave information such as weather, flight times and film reviews. By the end of the decade, the practice of shooting on film for inserts in news broadcasts was declining, with the introduction of ENG technology into the UK. The equipment would gradually become less cumbersome – the BBC's first attempts, in the latter half of the decade, had used a Philips colour camera with a backpack base station and a separate portable Sony U-matic recorder.
In 1980, the Iranian Embassy Siege was shot electronically by the BBC Television News outside broadcast team, and the work of reporter Kate Adie, broadcasting live from Prince's Gate, was nominated for the BAFTA for actuality coverage, but was beaten by ITN for the 1980 award. Newsnight, the news and current affairs programme, was due to go on air on 23 January 1980, although trade union disagreements meant that its launch from Lime Grove was postponed by a week. On 27 August 1981, Moira Stuart became the first African Caribbean female newsreader to appear on British television. By 1982, ENG technology had become sufficiently reliable for Bernard Hesketh to use an Ikegami camera to cover the Falklands War, coverage for which he won the "Royal Television Society Cameraman of the Year" award and a BAFTA nomination – the first time that BBC News had relied upon an electronic camera, rather than film, in a conflict zone. BBC News won the BAFTA for its actuality coverage; however, the event is remembered in television terms for Brian Hanrahan's reporting, in which he coined the phrase "I'm not allowed to say how many planes joined the raid, but I counted them all out and I counted them all back" to circumvent restrictions – a line since cited as an example of good reporting under pressure.
The first BBC breakfast television programme, Breakfast Time, also launched during the 1980s, on 17 January 1983, from Lime Grove Studio E – two weeks before its ITV rival TV-am. Frank Bough, Selina Scott, and Nick Ross helped to wake viewers with a relaxed style of presenting. The Six O'Clock News first aired on 3 September 1984, eventually becoming the most watched news programme in the UK (although since 2006 it has been overtaken by the BBC News at Ten). In October 1984, images of millions of people starving to death in the Ethiopian famine were shown in Michael Buerk's Six O'Clock News reports. The BBC News crew were the first to document the famine, with Buerk's report on 23 October describing it as "a biblical famine in the 20th century" and "the closest thing to hell on Earth". The BBC News report shocked Britain, prompting its citizens to inundate relief agencies such as Save the Children with donations, and bringing global attention to the crisis in Ethiopia. The news report was also watched by Bob Geldof, who would organise the charity single "Do They Know It's Christmas?" to raise money for famine relief, followed by the Live Aid concert in July 1985.
Starting in 1981, the BBC gave a common theme to its main news bulletins with new electronic titles – a set of computer-animated "stripes" forming a circle on a red background, with a "BBC News" typescript appearing below the circle graphics, and a theme tune consisting of brass and keyboards. The Nine used a similar (striped) number 9. The red background was replaced by blue from 1985 until 1987. By 1987, the BBC had decided to re-brand its bulletins, establishing individual styles again for each one with differing titles and music; the weekend and holiday bulletins were branded in a similar style to the Nine, although the "stripes" introduction continued to be used until 1989 on occasions when a news bulletin was screened outside the scheduled running order. In 1987, John Birt resurrected the practice of correspondents working for both TV and radio with the introduction of bi-media journalism.
During the 1990s, a wider range of services began to be offered by BBC News, with BBC World Service Television split into BBC World (news and current affairs) and BBC Prime (light entertainment). Content for a 24-hour news channel was thus required, and the domestic equivalent, BBC News 24, launched in 1997. Rather than set bulletins, ongoing reports and coverage were needed to keep both channels functioning, which meant a greater emphasis on budgeting for both. In 1998, after 66 years at Broadcasting House, the BBC Radio News operation moved to BBC Television Centre. New technology, provided by Silicon Graphics, came into use in 1993 for a relaunch of the main BBC 1 bulletins, creating a virtual set which appeared to be much larger than it was physically. The relaunch also brought all bulletins into the same style of set, with only small changes in colouring, titles, and music to differentiate each. A computer-generated cut-glass sculpture of the BBC coat of arms was the centrepiece of the programme titles until the large-scale corporate rebranding of news services in 1999. In November 1997, BBC News Online was launched, following individual webpages for major news events such as the 1996 Olympic Games, the 1997 general election, and the death of Princess Diana.
In 1999, the biggest relaunch to date occurred, with BBC One bulletins, BBC World, BBC News 24, and BBC News Online all adopting a common style. One of the most significant changes was the gradual adoption of the corporate image by the BBC regional news programmes, giving a common style across local, national and international BBC television news. This also included Newyddion, the main news programme of the Welsh-language channel S4C, produced by BBC News Wales. Following the relaunch of BBC News in 1999, regional headlines were included at the start of the BBC One news bulletins in 2000. The English regions did, however, lose five minutes at the end of their bulletins, due to a new headline round-up at 18:55. The year 2000 also saw the Nine O'Clock News moved to the later time of 22:00, in response to ITN, which had just moved its popular News at Ten programme to 23:00. ITN briefly brought back News at Ten, but following poor ratings head-to-head against the BBC's Ten O'Clock News, the ITN bulletin was moved to 22:30, where it remained until 14 January 2008. The departure of Peter Sissons and Michael Buerk from the Ten O'Clock News led to changes in the BBC One bulletin presenting team on 20 January 2003.
The Six O'Clock News became double-headed, with George Alagiah and Sophie Raworth, after Huw Edwards and Fiona Bruce moved to present the Ten. A new set design featuring a projected fictional newsroom backdrop was introduced, followed on 16 February 2004 by new programme titles to match those of BBC News 24. BBC News 24 and BBC World introduced a new style of presentation in December 2003, which was slightly altered on 5 July 2004 to mark 50 years of BBC Television News.
On 7 March 2005, Director-General Mark Thompson launched the "Creative Futures" project to restructure the organisation. The individual positions of editor of the One and Six O'Clock News were replaced by a new daytime position in November 2005. Kevin Bakhurst became the first Controller of BBC News 24, replacing the position of editor. Amanda Farnsworth became daytime editor, while Craig Oliver was later named editor of the Ten O'Clock News. Bulletins received new titles and a new set design in May 2006, to allow Breakfast to move into the main studio for the first time since 1997. The new set featured Barco videowall screens with a background of the London skyline used for the main bulletins and, originally, an image of cirrus clouds against a blue sky for Breakfast; the latter was replaced following viewer criticism. The studio bore similarities to that of the ITN-produced ITV News in 2004, though ITN used a CSO virtual studio rather than the physical screens at BBC News. BBC News became part of a new BBC Journalism group in November 2006 as part of a restructuring of the BBC. The then-Director of BBC News, Helen Boaden, reported to the then-Deputy Director-General and head of the journalism group, Mark Byford, until he was made redundant in 2010.
On 18 October 2007, Director-General Mark Thompson announced a six-year plan, "Delivering Creative Futures" (based on his project begun in March 2005), merging the television current affairs department into a new "News Programmes" division. Thompson's announcement, in response to a £2 billion shortfall in funding, would, he said, deliver "a smaller but fitter BBC" in the digital age, by cutting its payroll and, in 2013, selling Television Centre. The various separate newsrooms for television, radio and online operations were merged into a single multimedia newsroom. Programme-making within the newsrooms was brought together to form a multimedia programme-making department. BBC World Service director Peter Horrocks said that the changes would achieve efficiency at a time of cost-cutting at the BBC. In his blog, he wrote that using the same resources across the various broadcast media meant either that fewer stories could be covered, or that more stories could be followed but broadcast in fewer ways. A new graphics and video playout system was introduced for the production of television bulletins in January 2007. This coincided with a new structure for BBC World News bulletins, with editors favouring a section devoted to analysis of the stories reported. The first new BBC News bulletin since the Six O'Clock News was announced in July 2007, following a successful trial in the Midlands. The summary, lasting 90 seconds, has been broadcast at 20:00 on weekdays since December 2007 and bears similarities to 60 Seconds on BBC Three, but also includes headlines from the various BBC regions and a weather summary.
As part of a long-term cost cutting programme, bulletins were renamed the BBC News at One, Six and Ten respectively in April 2008 while BBC News 24 was renamed BBC News and moved into the same studio as the bulletins at BBC Television Centre. BBC World was renamed BBC World News and regional news programmes were also updated with the new presentation style, designed by Lambie-Nairn. 2008 also saw tri-media introduced across TV, radio, and online. The studio moves also meant that Studio N9, previously used for BBC World, was closed, and operations moved to the previous studio of BBC News 24. Studio N9 was later refitted to match the new branding, and was used for the BBC's UK local elections and European elections coverage in early June 2009. A strategy review of the BBC in March 2010, confirmed that having "the best journalism in the world" would form one of five key editorial policies, as part of changes subject to public consultation and BBC Trust approval. After a period of suspension in late 2012, Helen Boaden ceased to be the Director of BBC News. On 16 April 2013, incoming BBC Director-General Tony Hall named James Harding, a former editor of The Times of London newspaper as Director of News and Current Affairs. From August 2012 to March 2013, all news operations moved from Television Centre to new facilities in the refurbished and extended Broadcasting House, in Portland Place. The move began in October 2012, and also included the BBC World Service, which moved from Bush House following the expiry of the BBC's lease. This new extension to the north and east, referred to as "New Broadcasting House", includes several new state-of-the-art radio and television studios centred around an 11-storey atrium. The move began with the domestic programme The Andrew Marr Show on 2 September 2012, and concluded with the move of the BBC News channel and domestic news bulletins on 18 March 2013. The newsroom houses all domestic bulletins and programmes on both television and radio, as well as the BBC World Service international radio networks and the BBC World News international television channel. BBC News and CBS News established an editorial and newsgathering partnership in 2017, replacing an earlier long-standing partnership between BBC News and ABC News. In an October 2018 Simmons Research survey of 38 news organisations, BBC News was ranked the fourth most trusted news organisation by Americans, behind CBS News, ABC News and The Wall Street Journal. In January 2020 the BBC announced a BBC News savings target of £80 million per year by 2022, involving about 450 staff reductions from the current 6,000. BBC director of news and current affairs Fran Unsworth said there would be further moves toward digital broadcasting, in part to attract back a youth audience, and more pooling of reporters to stop separate teams covering the same news. A further 70 staff reductions were announced in July 2020. BBC Three began airing the news programme The Catch Up in February 2022. It is presented by Levi Jouavel, Kirsty Grant, and Callum Tulley and aims to get the channel's target audience (16 to 34-year olds) to make sense of the world around them while also highlighting optimistic stories. Compared to its predecessor 60 Seconds, The Catch Up is three times longer, running for about three minutes and not airing during weekends. According to its annual report as of December 2021[update], India has the largest number of people using BBC services in the world. 
In May 2025, following the earthquake that hit Myanmar and Thailand, a television news bulletin (BBC News Myanmar) from the Burmese service using a vacated Voice of America satellite frequency began its broadcasts. Programming and reporting In November 2023, BBC News joined with the International Consortium of Investigative Journalists, Paper Trail Media [de] and 69 media partners including Distributed Denial of Secrets and the Organised Crime and Corruption Reporting Project (OCCRP) and more than 270 journalists in 55 countries and territories to produce the 'Cyprus Confidential' report on the financial network which supports the regime of Vladimir Putin, mostly with connections to Cyprus, and showed Cyprus to have strong links with high-up figures in the Kremlin, some of whom have been sanctioned. Government officials including Cyprus president Nikos Christodoulides and European lawmakers began responding to the investigation's findings in less than 24 hours, calling for reforms and launching probes. BBC News is responsible for the news programmes and documentary content on the BBC's general television channels, as well as the news coverage on the BBC News Channel in the UK, and 22 hours of programming for the corporation's international BBC World News channel. Coverage for BBC Parliament is carried out on behalf of the BBC at Millbank Studios, though BBC News provides editorial and journalistic content. BBC News content is also output onto the BBC's digital interactive television services under the BBC Red Button brand, and until 2012, on the Ceefax teletext system. The music on all BBC television news programmes was introduced in 1999 and composed by David Lowe. It was part of the re-branding which commenced in 1999 and features 'BBC Pips'. The general theme was used on bulletins on BBC One, News 24, BBC World and local news programmes in the BBC's Nations and Regions. Lowe was also responsible for the music on Radio One's Newsbeat. The theme has had several changes since 1999, the latest in March 2013. The BBC Arabic Television news channel launched on 11 March 2008, a Persian-language channel followed on 14 January 2009, broadcasting from the Peel wing of Broadcasting House; both include news, analysis, interviews, sports and highly cultural programmes and are run by the BBC World Service and funded from a grant-in-aid from the British Foreign Office (and not the television licence). The BBC Verify service was launched in 2023 to fact-check news stories, followed by BBC Verify Live in 2025. BBC Radio News produces bulletins for the BBC's national radio stations and provides content for local BBC radio stations via the General News Service (GNS), a BBC-internal news distribution service. BBC News does not produce the BBC's regional news bulletins, which are produced individually by the BBC nations and regions themselves. The BBC World Service broadcasts to some 150 million people in English as well as 27 languages across the globe. BBC Radio News is a patron of the Radio Academy. BBC News Online is the BBC's news website. Launched in November 1997, it is one of the most popular news websites, with 1.2 billion website visits in April 2021, as well as being used by 60% of the UK's internet users for news. The website contains international news coverage as well as entertainment, sport, science, and political news. Mobile apps for Android, iOS and Windows Phone systems have been provided since 2010. 
Many television and radio programmes are also available to view on the BBC iPlayer and BBC Sounds services. The BBC News channel is also available to view 24 hours a day, while video and radio clips are also available within online news articles. In October 2019, BBC News Online launched a mirror on the dark web anonymity network Tor in an effort to circumvent censorship. Criticism The BBC is required by its charter to be free from both political and commercial influence and answers only to its viewers and listeners. This political objectivity is sometimes questioned. For instance, The Daily Telegraph (3 August 2005) carried a letter from the KGB defector Oleg Gordievsky, referring to it as "The Red Service". Books have been written on the subject, including anti-BBC works like Truth Betrayed by W J West and The Truth Twisters by Richard Deacon. The BBC has been accused of bias by Conservative MPs. The BBC's Editorial Guidelines on Politics and Public Policy state that while "the voices and opinions of opposition parties must be routinely aired and challenged", "the government of the day will often be the primary source of news". The BBC is regularly accused by the government of the day of bias in favour of the opposition and, by the opposition, of bias in favour of the government. Similarly, during times of war, the BBC is often accused by the UK government, or by strong supporters of British military campaigns, of being overly sympathetic to the view of the enemy. An edition of Newsnight at the start of the Falklands War in 1982 was described as "almost treasonable" by John Page, MP, who objected to Peter Snow saying "if we believe the British". During the first Gulf War, critics of the BBC took to using the satirical name "Baghdad Broadcasting Corporation". During the Kosovo War, the BBC were labelled the "Belgrade Broadcasting Corporation" (suggesting favouritism towards the FR Yugoslavia government over ethnic Albanian rebels) by British ministers, although Slobodan Milosević (then FRY president) claimed that the BBC's coverage had been biased against his nation. Conversely, some of those who style themselves anti-establishment in the United Kingdom or who oppose foreign wars have accused the BBC of pro-establishment bias or of refusing to give an outlet to "anti-war" voices. Following the 2003 invasion of Iraq, a study by the Cardiff University School of Journalism of the reporting of the war found that nine out of 10 references to weapons of mass destruction during the war assumed that Iraq possessed them, and only one in 10 questioned this assumption. It also found that, out of the main British broadcasters covering the war, the BBC was the most likely to use the British government and military as its source. It was also the least likely to use independent sources, like the Red Cross, who were more critical of the war. When it came to reporting Iraqi casualties, the study found fewer reports on the BBC than on the other three main channels. The report's author, Justin Lewis, wrote "Far from revealing an anti-war BBC, our findings tend to give credence to those who criticised the BBC for being too sympathetic to the government in its war coverage. Either way, it is clear that the accusation of BBC anti-war bias fails to stand up to any serious or sustained analysis." Prominent BBC appointments are constantly assessed by the British media and political establishment for signs of political bias. 
The appointment of Greg Dyke as Director-General was highlighted by press sources because Dyke was a Labour Party member and former activist, as well as a friend of Tony Blair. The BBC's former Political Editor, Nick Robinson, was some years ago a chairman of the Young Conservatives and did, as a result, attract informal criticism from the former Labour government, but his predecessor Andrew Marr faced similar claims from the right because he was editor of The Independent, a liberal-leaning newspaper, before his appointment in 2000. Mark Thompson, former Director-General of the BBC, admitted the organisation has been biased "towards the left" in the past. He said, "In the BBC I joined 30 years ago, there was, in much of current affairs, in terms of people's personal politics, which were quite vocal, a massive bias to the left". He then added, "The organization did struggle then with impartiality. Now it is a completely different generation. There is much less overt tribalism among the young journalists who work for the BBC." Following the EU referendum in 2016, some critics suggested that the BBC was biased in favour of leaving the EU. For instance, in 2018, the BBC received complaints from people who took issue that the BBC was not sufficiently covering anti-Brexit marches while giving smaller-scale events hosted by former UKIP leader Nigel Farage more airtime. On the other hand, a poll released by YouGov showed that 45% of people who voted to leave the EU thought that the BBC was 'actively anti-Brexit' compared to 13% of the same kinds of voters who think the BBC is pro-Brexit. In 2008, the BBC Hindi was criticised by some Indian outlets for referring to the terrorists who carried out the 2008 Mumbai attacks as "gunmen". The response to this added to prior criticism from some Indian commentators suggesting that the BBC may have an Indophobic bias. In March 2015, the BBC was criticised for a BBC Storyville documentary interviewing one of the rapists in India. In spite of a ban ordered by the Indian High court, the BBC still aired the documentary "India's Daughter" outside India. BBC News was at the centre of a political controversy following the 2003 invasion of Iraq. Three BBC News reports (Andrew Gilligan's on Today, Gavin Hewitt's on The Ten O'Clock News and another on Newsnight) quoted an anonymous source that stated the British government (particularly the Prime Minister's office) had embellished the September Dossier with misleading exaggerations of Iraq's weapons of mass destruction capabilities. The government denounced the reports and accused the corporation of poor journalism. In subsequent weeks the corporation stood by the report, saying that it had a reliable source. Following intense media speculation, David Kelly was named in the press as the source for Gilligan's story on 9 July 2003. Kelly was found dead, by suicide, in a field close to his home early on 18 July. An inquiry led by Lord Hutton was announced by the British government the following day to investigate the circumstances leading to Kelly's death, concluding that "Dr. Kelly took his own life." In his report on 28 January 2004, Lord Hutton concluded that Gilligan's original accusation was "unfounded" and the BBC's editorial and management processes were "defective". In particular, it specifically criticised the chain of management that caused the BBC to defend its story. 
The BBC Director of News, Richard Sambrook, the report said, had accepted Gilligan's word that his story was accurate in spite of his notes being incomplete. Davies had then told the BBC Board of Governors that he was happy with the story and told the Prime Minister that a satisfactory internal inquiry had taken place. The Board of Governors, under the guidance of its chairman, Gavyn Davies, accepted that further investigation of the Government's complaints was unnecessary. Because of the criticism in the Hutton report, Davies resigned on the day of publication. BBC News faced an important test, reporting on itself with the publication of the report, but by common consent (of the Board of Governors) managed this "independently, impartially and honestly". Davies' resignation was followed by that of the Director-General, Greg Dyke, the following day, and by Gilligan's resignation on 30 January. While the affair was undoubtedly a traumatic experience for the corporation, an ICM poll in April 2003 indicated that it had sustained its position as the best and most trusted provider of news.
The BBC has faced accusations of both anti-Israel and anti-Palestine bias. Douglas Davis, the London correspondent of The Jerusalem Post, has described the BBC's coverage of the Arab–Israeli conflict as "a relentless, one-dimensional portrayal of Israel as a demonic, criminal state and Israelis as brutal oppressors [which] bears all the hallmarks of a concerted campaign of vilification that, wittingly or not, has the effect of delegitimising the Jewish state and pumping oxygen into a dark old European hatred that dared not speak its name for the past half-century". However, two large independent studies, one conducted by Loughborough University and the other by Glasgow University's Media Group, concluded that Israeli perspectives are given greater coverage. Critics of the BBC argue that the Balen Report proves systematic bias against Israel in headline news programming. The Daily Mail and The Daily Telegraph criticised the BBC for spending hundreds of thousands of British taxpayers' pounds on preventing the report from being released to the public. Jeremy Bowen, the BBC's Middle East editor, was singled out specifically for bias by the BBC Trust, which concluded that he violated "BBC guidelines on accuracy and impartiality."
An independent panel appointed by the BBC Trust was set up in 2006 to review the impartiality of the BBC's coverage of the Israeli–Palestinian conflict. The panel's assessment was that "apart from individual lapses, there was little to suggest deliberate or systematic bias." While noting a "commitment to be fair accurate and impartial" and praising much of the BBC's coverage, the independent panel concluded "that BBC output does not consistently give a full and fair account of the conflict. In some ways the picture is incomplete and, in that sense, misleading." It noted that "the failure to convey adequately the disparity in the Israeli and Palestinian experience, [reflects] the fact that one side is in control and the other lives under occupation". Writing in the Financial Times, Philip Stephens, one of the panellists, later accused the BBC's director-general, Mark Thompson, of misrepresenting the panel's conclusions. He further opined: "My sense is that BBC news reporting has also lost a once iron-clad commitment to objectivity and a necessary respect for the democratic process. If I am right, the BBC, too, is lost". Mark Thompson published a rebuttal in the FT the next day.
One BBC correspondent's description, while reporting on the funeral of Yasser Arafat, of having been left with tears in her eyes led to further questions of impartiality, particularly from Martin Walker in a guest opinion piece in The Times, who picked out the apparent case of Fayad Abu Shamala, the BBC Arabic Service correspondent, who told a Hamas rally on 6 May 2001 that journalists in Gaza were "waging the campaign shoulder to shoulder together with the Palestinian people". Walker argued that the independent inquiry was flawed for two reasons: first, the period over which it was conducted (August 2005 to January 2006) surrounded the Israeli withdrawal from Gaza and Ariel Sharon's stroke, which produced more positive coverage than usual; second, the inquiry only looked at the BBC's domestic coverage, excluding output on the BBC World Service and BBC World. Tom Gross accused the BBC of glorifying Hamas suicide bombers, and condemned its policy of inviting guests such as Jenny Tonge and Tom Paulin, who have compared Israeli soldiers to Nazis. Writing for the BBC, Paulin said Israeli soldiers should be "shot dead" like Hitler's SS, and said he could "understand how suicide bombers feel". The BBC also faced criticism for not airing a Disasters Emergency Committee aid appeal for Palestinians who suffered in Gaza during the 22-day war there between late 2008 and early 2009; most other major UK broadcasters did air this appeal, though rival Sky News did not. British journalist Julie Burchill has accused the BBC of creating a "climate of fear" for British Jews over its "excessive coverage" of Israel compared to other nations. In light of the Gaza war, the BBC suspended seven Arab journalists over allegations of expressing support for Hamas via social media.
BBC News and ABC News shared video segments and reporters as needed in producing their newscasts, with the BBC showing ABC World News Tonight with David Muir in the UK. However, in July 2017, the BBC announced a new partnership with CBS News that allows both organisations to share video, editorial content, and additional newsgathering resources in New York, London, Washington and around the world. BBC News subscribes to wire services from leading international agencies including PA Media (formerly Press Association), Reuters, and Agence France-Presse. In April 2017, the BBC dropped Associated Press in favour of an enhanced service from AFP.
BBC News reporters and broadcasts have been, and continue to be, banned in several countries, primarily for reporting unfavourable to the ruling government. For example, correspondents were banned by the former apartheid regime of South Africa. The BBC was banned in Zimbabwe under Mugabe for eight years as a terrorist organisation, until being allowed to operate again over a year after the 2008 elections. The BBC was banned in Burma (officially Myanmar) after its coverage and commentary on anti-government protests there in September 2007; the ban was lifted four years later, in September 2011. Other cases have included Uzbekistan, China, and Pakistan. BBC Persian, the BBC's Persian-language news site, was blocked from the Iranian internet in 2006. The BBC News website was made available in China again in March 2008 but, as of October 2014, was blocked again.
In June 2015, the Rwandan government placed an indefinite ban on BBC broadcasts following the airing of Rwanda's Untold Story, a controversial documentary regarding the 1994 Rwandan genocide, broadcast on BBC2 on 1 October 2014. The UK's Foreign Office recognised "the hurt caused in Rwanda by some parts of the documentary". In February 2017, reporters from the BBC (as well as the Daily Mail, The New York Times, Politico, CNN, and others) were denied access to a United States White House briefing. In 2017, BBC India was banned for a period of five years from covering all national parks and sanctuaries in India. Following Ofcom's withdrawal of CGTN's UK broadcast licence on 4 February 2021, China banned BBC News from airing in China.
========================================
[SOURCE: https://en.wikipedia.org/wiki/La_Familia_(Beitar_supporters%27_group)] | [TOKENS: 900]
La Familia (Beitar supporters' group) La Familia is an ultras group which supports the Israeli Premier League football club Beitar Jerusalem. The group is known for its far-right, nationalist extremism and anti-Arab racism. Organization La Familia first organized in 2005 and congregated in the eastern sections of Teddy Stadium. Estimates of the group's size vary: a reporter put the number at a few hundred, while a leader said that it encompassed a network of 3,000 supporters. In 2008, a BBC correspondent said that the group made up about 20% of the crowd. They are the most vocal in the stadium, and some local fans follow their gameday chants. The club's history is intertwined with the Betar Zionist youth movement, and the club has since been supported by several Israeli politicians on the political right. La Familia has similarly been labeled far-right and is openly against those it views as being on the left. Beitar has publicly condemned the group, going as far as barring its supporters from a match. Some Beitar fans have expressed embarrassment over the organization and openly oppose its ideals. Incidents The group is notorious for chants insulting Arab players, and for displaying the flag of the banned Kach party. Chants with lines including "death to the Arabs" and "Muhammad is a homosexual" are common. During a December 2007 Toto Cup semi-final game between Beitar Jerusalem and the Israeli-Arab team Bnei Sakhnin, La Familia sang provocative chants insulting the Islamic prophet Muhammad. The Israel Football Association (IFA) punished Beitar by forcing them to play their next game against Sakhnin with no fans present. Vandals set fire to the IFA's offices and left graffiti threatening the life of the IFA chairman. The graffiti included the initials "LF" for La Familia, but the group denied involvement. Bnei Sakhnin is the only Arab-Israeli club in the Premier League. Matches between the clubs often result in violence and arrests. Beitar was disciplined in 2008 after fans disrupted a minute of silence marking the death of Prime Minister Yitzhak Rabin. Later that year, La Familia led a pitch invasion in what would have been a title-clinching win against Maccabi Herzliya. The IFA deducted two points from Beitar and ordered that the next game be played behind closed doors. In December 2011, fans yelled "Give Toto a banana" towards Nigerian-born Toto Tamuz. The IFA again punished Beitar with a two-point deduction and another game in an empty stadium. Supporters stormed the Malha Mall after a match in March 2012 while chanting "Death to Arabs"; it was reported that Arab workers were harassed and beaten. A few months later, a group of Beitar fans attacked a McDonald's where Arabs were among the staff. The group was adamantly against the signing of Nigerian Muslim Ibrahim Nadallah, who lasted half a season in 2005, and discouraged Muslims from joining the team, stating that "the extremists won't change". The 2013 transfer of two Chechen Muslims, Dzhabrail Kadiyev and Zaur Sadayev, angered the supporters. Members of La Familia set a team office on fire after the announcement. Fans walked out of a match in March that saw Sadayev score his first goal for Beitar. As of December 2020, no Arab player had ever played for Beitar, unlike other top Israeli clubs, a record attributed to the group's protests.
During a second qualifying round Europa League match at Sporting Charleroi on 16 July 2015, the game was delayed for three minutes due to the unruly behavior of the Israeli supporters, who threw flares onto the pitch; the Charleroi goalkeeper, Nicolas Penneteau, was hit by an object. The incident led the owner, Eli Tabib, to decide to leave the club. In 2016, undercover police infiltrated La Familia over a one-month period, resulting in 56 arrests, including nine soldiers and two minors, on suspicion of offences including weapons dealing and violence. On 12 October 2023, dozens of La Familia members attempted to enter the Sheba Medical Center in Tel HaShomer after reports emerged that a Hamas terrorist from the Gaza war had been treated there. Three members were arrested by police.
========================================
[SOURCE: https://www.wired.com/story/inside-the-gay-tech-mafia/] | [TOKENS: 15557]
Inside the Gay Tech Mafia
Zoë Bernard | The Big Story | Feb 19, 2026 6:00 AM
Gay men have long been rumored to run Silicon Valley. WIRED investigates.
No one can say exactly when, or if, gay men started running Silicon Valley. They seem to have dominated its upper ranks at least the past five years, maybe more. On platforms like X, the clues are there: whispers of private-island retreats, tech executives going “gay for clout,” and the suggestion that a “seed round” is not, strictly speaking, a financial term. It is an idea so taken for granted, in fact, that when I call up a well-connected hedge fund manager to ask his thoughts about what is sometimes referred to in industry circles as the “gay tech mafia,” he audibly yawns. “Of course,” he says. “This has always been the case.”
It had been the case, the hedge funder says, back in 2012, when he was raising money from a venture capitalist whose office was staffed with dozens of “attractive, strong young men,” all of whom were “under 30” and looked as though they had freshly decamped from “the high school debate club.” “They were all sleeping with each other and starting companies,” he says. And it is absolutely the case now, he adds, when gay men are running influential companies in Silicon Valley and maintain entire social calendars with scarcely a straight man, much less a woman, in sight. “Of course the gay tech mafia exists,” he continues. “This is not some Illuminati conspiracy theory. And you do not have to be gay to join. They like straight guys who sleep with them even more.”
Ever since I started covering Silicon Valley in 2017, I’ve heard variations of this rumor—that “gays,” as an AI founder named Emmett Chen-Ran has quipped, “run this joint.” On its face, a gay tech mafia seemed too dumb to warrant actual investigative inquiry. Sure, there were gay men in high places: Peter Thiel, Tim Cook, Sam Altman, Keith Rabois, the list went on. But the idea that they were operating some kind of shadowy cabal seemed born entirely of homophobia, the indulgence of which might play into the hands of conspiracy-minded conservatives like Laura Loomer, who, in 2024, tweeted that the “high tech VC world just seems to be one big, exploitative gay mafia.”
Over time, though, the rumor refused to die, eventually curdling into something closer to conventional wisdom. Last spring, at a venture capitalist’s party in Southern California, a middle-aged investor complained to me at length about how he was struggling to raise his new fund. The problem, he explained, boiled down to discrimination. I took him in as he spoke. He had the uniform down cold: a white man with a crew cut, wearing a tasteless button-down stretched over mild prosperity, and a fluent conviction that AI was, thank god, the next big thing. He looked exactly like the sort of man Silicon Valley has been built to reward. And yet here he was, insisting that the system was rigged against him. “If I were gay, I wouldn’t be having any trouble,” he said. “That’s the whole thing with Silicon Valley these days. The only way to catch a break,” he claimed, “is if you’re gay.”
Over the course of 2025, similar sentiments bubbled up on X, where Silicon Valley tech workers joked about offering “fractional vizier services to the gay elite.” Anonymous accounts hinted at an underworld of gay Silicon Valley power brokers who influenced and courted—“groomed”—aspiring entrepreneurs.
At an AI conference in Los Angeles, an engineer casually referred to a top AI firm’s offices, more than once, as “twink town.”By the fall, speculation intensified, and then a photo appeared on X of a group of Y Combinator–backed founders crowded near a sauna with Garry Tan, the incubator’s president. The image seemed innocuous enough: a few young, nerdy men in swim trunks, squinting into the camera. But almost instantly, it set off a round of viral gossip about the peculiar intimacies of venture capital culture. Not long after, a founder from Germany, Joschua Sutee, posted a photo of himself and his male cofounders—apparently naked, swaddled in bedsheets—submitted as part of what seemed to be a Y Combinator application, a move that appeared designed to court a knowingly erotic male audience. “Here I come, @ycombinator,” the caption read.The notion that Y Combinator was grooming male entrepreneurs makes little sense—for lots of reasons, and for one in particular. “Garry is straight straight straight straight,” says a person who knows Tan. “But he believes in the benefits of the sauna.” When I ask Tan for a comment, he is blunt—some founders were over for dinner and asked to use his recently installed sauna and cold plunge. From there, Tan says, “rejects” of Y Combinator “manufactured this meme that it was somehow more than that.”And yet, similar rumors persisted and compounded, originating as often from outsiders (sometimes with dubious political motivations) as from insiders. When I call up my longtime industry sources to get their thoughts on the gay tech mafia, not only have they heard of it—they have highly specific notions of how it works. These are credible people who believe seemingly incredible things. One San Francisco investor tells me that he believes the Thiel Fellowship is a training ground for gay industry leaders. (When I run this notion past a couple of former Thiel Fellows, they tell me they met Thiel one time at a dinner, where he appeared “slightly bored,” says one of the fellows, a straight man. “I mean, I wish Peter tried to groom me.”) Meanwhile, people’s gaydars are practically overheating. I hear, more than once, that anyone in Silicon Valley who has achieved outsize success is probably gay.Isn’t it strange, one San Francisco–based venture capitalist muses, how a certain defense-tech executive achieved so much success at a relatively young age? “Isn’t he gay?” the VC asks. “He must be.” I tell him he is mistaken—the executive is married to a woman. “Sure,” he replies. “But have you ever seen them together?” Another entrepreneur who raised capital from two well-known gay investors tells me that he’s accustomed to fielding scrutiny about his sexual orientation. “People say I’m gay,” he says. “There’s always jokes. Like, ‘How’d you get the money, bro?’”Then there are the anonymous X accounts amplifying allegations of misconduct. Their posts are calibrated for attention: detailed enough to suggest insider knowledge of the Valley, vague enough to invite darker interpretations. I take the bait and, one afternoon in late November, spend nearly an hour texting one such account owner over Signal who agrees to speak to me only if I keep his handle secret.This person describes the Valley as a place known for “ecstasy, psychedelic fueled gay sex stuff.” Has he experienced any of it himself? No. 
But he knows people who have—people who are “pretty afraid” and “young af.” He won’t name names, won’t connect me to anyone, but he swears that any negative rumor I’ve heard about gay men in Silicon Valley is true. He suggests a conspiracy so sprawling it rivals QAnon and implicates the entire US government. He gives me vague reporting advice: “It should be easy to find. 2nd page of Google type thing.”Finally, frustrated by his evasiveness, I ask what he thinks will happen if he tells me what he knows. “I truly believe,” he says, “killed.” Then he offers a suggestion. The only way to expose this blockbuster of a tale is “project veritas style: Take a 20 year old dude, make an X acc[ount]. Send him to the right places in SF and you’ll break the story if you go deep enough.”ILLUSTRATION: SAM WHITNEY; GETTY IMAGESThe problem with conspiracy theories, even offensive ones, is that they are rarely wholly invented. They almost always arise from some fragment of truth, which imagination then contorts. The difficulty with this particular rumor is that, while I was unable to substantiate darker allegations, parts of the story still resonate. In conversations with 51 people—31 of them gay men, many of them influential investors and entrepreneurs—a portrait emerged of gay influence in Silicon Valley that is intricate, layered, and often contradictory. It is a world in which power, desire, and ambition interweave in ways both visible and unseen, a world that is, in some ways, far richer—and more complicated—than the rumors themselves suggest.Most of the people who speak to me for this story do so on the condition that their names be kept confidential. Some of it is just garden-variety caution. “It may not be wise for me to be talking to a reporter describing all these parties,” says one, “because people would be like, Geez, why would we invite you?” Other excuses are murkier: “It’s not so safe to speak about this in too much detail,” says a founder who works in AI. “Anyone involved is an operator or a VC, and it might lead people to wonder about who is getting advantages.” Amid the deflections and whispers, though, there seems to be an unmistakable truth: Gay men are rising.“The gays who work in tech are succeeding vastly,” an angel investor, who is a gay man, tells me. “There’s the founder group of gays who all hang out with each other, because the gays always cluster together. By virtue of that, they become friends and vacation together.” Even more importantly: “They support each other, whether that’s to hire someone or angel invest in their companies or lead their funding rounds.”Some of these networks have begun to spill into public view. There is a Substack called Friend Of, written by Jack Randall, who formerly worked in communications at Robinhood, that chronicles gay ascendence into the centers of power. “We run the tech mafia (see Apple, OpenAI),” Randall writes. “We hold top government posts (see the Treasury Secretary). We anchor primetime news and the NYE Ball Drop. Our dating app’s stock outperforms its straight peers. And in the US, gay men are, on average, better educated and wealthier than the general population.”A new company called Sector aims to formalize this network. Founded by Brian Tran, a former designer in residence at Kleiner Perkins, Sector has a website that features photos of handsome men on beaches and at dimly lit dinners. One member describes it to me as a curated network where introductions unfold between well-heeled gay men with shared interests. 
“It’s up to you to decide,” the member tells me. “Is this professional, is it platonic, or is it something romantic?” In an interview with Randall, Tran said, “I think we could displace Grindr in the coming years.”On any given week in San Francisco, Partiful invites float around the community. If there is a “regular Halloween party, the gays will have their own Halloween party, and Sam Altman will be there,” says Jayden Clark, a straight podcaster who hosts a tech culture podcast and was not invited to the gay Halloween party. (Altman attended dressed as Spider-Man, a nod to Andrew Garfield, who played the superhero and has since been cast as Altman in an upcoming film.) I hear of not one but two White Lotus–themed gay tech parties, both equally extravagant. “Girls are not present,” says that same angel investor. “They are just not there.” There is also a “Gay VC Mafia” group chat that is, as one member describes it, “60 percent business” and “40 percent hee hee ha ha” about “classically gay topics.” With a steady churn of tech events aimed at gay men, the social incentives stack up fast. Connections blur—“professional, physical, or sometimes romantic,” as an AI founder puts it. The pull of this bubble is so strong, he continues, that it’s “an uphill battle to socialize with straight people.”None of this is necessarily unfamiliar in the clubby world of Silicon Valley, where the smart, successful, and wildly rich have always formed in-groups. There’s the so-called OpenAI mafia and the Airbnb mafia, and before those the PayPal mafia—alumni of moonshot companies who bankroll the next wave of startups. So some of what reads as advantage is, on closer inspection, structural and unremarkable. San Francisco combines two things in unusual density: one of the country’s largest gay populations and a tech industry that has reshaped global power. “For sure, gay men are overrepresented and have had an unbelievable run in the Bay Area,” says Mark, another gay entrepreneur who runs an AI startup. “In a city that has the most venture capital in the world, it isn’t surprising that this money is going directly to gay men.” (This perception, for what it’s worth, runs counter to statistics: Between 2000 and 2022, the years for which data is available, only 0.5 percent of startup venture funding went to LGBTQ+ founders.) “It’s not that there is some kind of gay mafia,” Mark continues. “But if I told you who are my friends that I want to invest in, they happen to be gays. Who are the people without kids who can grind away on the weekends? It’s the gays.” (Sources identified in this story by a first name only, like Mark, preferred the use of pseudonyms.)Imagine this, Mark says: You are a young, nerdy, closeted gay man. You grow up never quite fitting in. Your parents start asking questions. Why don’t you have a girlfriend? You tell them you’re too busy for a relationship. Eventually, you move to San Francisco, a city that, as one person puts it, is like “Disneyland for gay men.” Your world opens up. You meet other people like you—men who are openly out, many for the first time in their lives. These men happen to be working at influential companies. They are building technology that is astonishing. And slowly it dawns on you: Maybe you, too—a person who has spent a lifetime overlooked and underestimated—can build something extraordinary. “Gays feel,” Mark says, “that they have something to prove.”This is, more or less, the nature of how power and money have moved throughout networks since the dawn of time. 
Inside the Gay Tech Mafia

No one can say exactly when, or if, gay men started running Silicon Valley. They seem to have dominated its upper ranks for at least the past five years, maybe more. On platforms like X, the clues are there: whispers of private-island retreats, tech executives going “gay for clout,” and the suggestion that a “seed round” is not, strictly speaking, a financial term. It is an idea so taken for granted, in fact, that when I call up a well-connected hedge fund manager to ask his thoughts about what is sometimes referred to in industry circles as the “gay tech mafia,” he audibly yawns. “Of course,” he says. “This has always been the case.” It had been the case, the hedge funder says, back in 2012, when he was raising money from a venture capitalist whose office was staffed with dozens of “attractive, strong young men,” all of whom were “under 30” and looked as though they had freshly decamped from “the high school debate club.” “They were all sleeping with each other and starting companies,” he says. And it is absolutely the case now, he adds, when gay men are running influential companies in Silicon Valley and maintain entire social calendars with scarcely a straight man, much less a woman, in sight. “Of course the gay tech mafia exists,” he continues. “This is not some Illuminati conspiracy theory. And you do not have to be gay to join. They like straight guys who sleep with them even more.” Ever since I started covering Silicon Valley in 2017, I’ve heard variations of this rumor—that “gays,” as an AI founder named Emmett Chen-Ran has quipped, “run this joint.” On its face, a gay tech mafia seemed too dumb to warrant actual investigative inquiry. Sure, there were gay men in high places: Peter Thiel, Tim Cook, Sam Altman, Keith Rabois, the list went on. But the idea that they were operating some kind of shadowy cabal seemed born entirely of homophobia, the indulgence of which might play into the hands of conspiracy-minded conservatives like Laura Loomer, who, in 2024, tweeted that the “high tech VC world just seems to be one big, exploitative gay mafia.” Over time, though, the rumor refused to die, eventually curdling into something closer to conventional wisdom. Last spring, at a venture capitalist’s party in Southern California, a middle-aged investor complained to me at length about how he was struggling to raise his new fund. The problem, he explained, boiled down to discrimination. I took him in as he spoke. He had the uniform down cold: a white man with a crew cut, wearing a tasteless button-down stretched over mild prosperity, and a fluent conviction that AI was, thank god, the next big thing. He looked exactly like the sort of man Silicon Valley has been built to reward. And yet here he was, insisting that the system was rigged against him. “If I were gay, I wouldn’t be having any trouble,” he said. “That’s the whole thing with Silicon Valley these days. The only way to catch a break,” he claimed, “is if you’re gay.” Over the course of 2025, similar sentiments bubbled up on X, where Silicon Valley tech workers joked about offering “fractional vizier services to the gay elite.” Anonymous accounts hinted at an underworld of gay Silicon Valley power brokers who influenced and courted—“groomed”—aspiring entrepreneurs.
At an AI conference in Los Angeles, an engineer casually referred to a top AI firm’s offices, more than once, as “twink town.” By the fall, speculation intensified, and then a photo appeared on X of a group of Y Combinator–backed founders crowded near a sauna with Garry Tan, the incubator’s president. The image seemed innocuous enough: a few young, nerdy men in swim trunks, squinting into the camera. But almost instantly, it set off a round of viral gossip about the peculiar intimacies of venture capital culture. Not long after, a founder from Germany, Joschua Sutee, posted a photo of himself and his male cofounders—apparently naked, swaddled in bedsheets—submitted as part of what seemed to be a Y Combinator application, a move that appeared designed to court a knowingly erotic male audience. “Here I come, @ycombinator,” the caption read. The notion that Y Combinator was grooming male entrepreneurs makes little sense—for lots of reasons, and for one in particular. “Garry is straight straight straight straight,” says a person who knows Tan. “But he believes in the benefits of the sauna.” When I ask Tan for a comment, he is blunt—some founders were over for dinner and asked to use his recently installed sauna and cold plunge. From there, Tan says, “rejects” of Y Combinator “manufactured this meme that it was somehow more than that.” And yet, similar rumors persisted and compounded, originating as often from outsiders (sometimes with dubious political motivations) as from insiders. When I call up my longtime industry sources to get their thoughts on the gay tech mafia, not only have they heard of it—they have highly specific notions of how it works. These are credible people who believe seemingly incredible things. One San Francisco investor tells me that he believes the Thiel Fellowship is a training ground for gay industry leaders. (When I run this notion past a couple of former Thiel Fellows, they tell me they met Thiel one time at a dinner, where he appeared “slightly bored,” says one of the fellows, a straight man. “I mean, I wish Peter tried to groom me.”) Meanwhile, people’s gaydars are practically overheating. I hear, more than once, that anyone in Silicon Valley who has achieved outsize success is probably gay. Isn’t it strange, one San Francisco–based venture capitalist muses, how a certain defense-tech executive achieved so much success at a relatively young age? “Isn’t he gay?” the VC asks. “He must be.” I tell him he is mistaken—the executive is married to a woman. “Sure,” he replies. “But have you ever seen them together?” Another entrepreneur who raised capital from two well-known gay investors tells me that he’s accustomed to fielding scrutiny about his sexual orientation. “People say I’m gay,” he says. “There’s always jokes. Like, ‘How’d you get the money, bro?’” Then there are the anonymous X accounts amplifying allegations of misconduct. Their posts are calibrated for attention: detailed enough to suggest insider knowledge of the Valley, vague enough to invite darker interpretations. I take the bait and, one afternoon in late November, spend nearly an hour texting one such account owner over Signal who agrees to speak to me only if I keep his handle secret. This person describes the Valley as a place known for “ecstasy, psychedelic fueled gay sex stuff.” Has he experienced any of it himself? No. 
But he knows people who have—people who are “pretty afraid” and “young af.” He won’t name names, won’t connect me to anyone, but he swears that any negative rumor I’ve heard about gay men in Silicon Valley is true. He suggests a conspiracy so sprawling it rivals QAnon and implicates the entire US government. He gives me vague reporting advice: “It should be easy to find. 2nd page of Google type thing.” Finally, frustrated by his evasiveness, I ask what he thinks will happen if he tells me what he knows. “I truly believe,” he says, “killed.” Then he offers a suggestion. The only way to expose this blockbuster of a tale is “project veritas style: Take a 20 year old dude, make an X acc[ount]. Send him to the right places in SF and you’ll break the story if you go deep enough.” The problem with conspiracy theories, even offensive ones, is that they are rarely wholly invented. They almost always arise from some fragment of truth, which imagination then contorts. The difficulty with this particular rumor is that, while I was unable to substantiate darker allegations, parts of the story still resonate. In conversations with 51 people—31 of them gay men, many of them influential investors and entrepreneurs—a portrait emerged of gay influence in Silicon Valley that is intricate, layered, and often contradictory. It is a world in which power, desire, and ambition interweave in ways both visible and unseen, a world that is, in some ways, far richer—and more complicated—than the rumors themselves suggest. Most of the people who speak to me for this story do so on the condition that their names be kept confidential. Some of it is just garden-variety caution. “It may not be wise for me to be talking to a reporter describing all these parties,” says one, “because people would be like, Geez, why would we invite you?” Other excuses are murkier: “It’s not so safe to speak about this in too much detail,” says a founder who works in AI. “Anyone involved is an operator or a VC, and it might lead people to wonder about who is getting advantages.” Amid the deflections and whispers, though, there seems to be an unmistakable truth: Gay men are rising. “The gays who work in tech are succeeding vastly,” an angel investor, who is a gay man, tells me. “There’s the founder group of gays who all hang out with each other, because the gays always cluster together. By virtue of that, they become friends and vacation together.” Even more importantly: “They support each other, whether that’s to hire someone or angel invest in their companies or lead their funding rounds.” Some of these networks have begun to spill into public view. There is a Substack called Friend Of, written by Jack Randall, who formerly worked in communications at Robinhood, that chronicles gay ascendence into the centers of power. “We run the tech mafia (see Apple, OpenAI),” Randall writes. “We hold top government posts (see the Treasury Secretary). We anchor primetime news and the NYE Ball Drop. Our dating app’s stock outperforms its straight peers. And in the US, gay men are, on average, better educated and wealthier than the general population.” A new company called Sector aims to formalize this network. Founded by Brian Tran, a former designer in residence at Kleiner Perkins, Sector has a website that features photos of handsome men on beaches and at dimly lit dinners. One member describes it to me as a curated network where introductions unfold between well-heeled gay men with shared interests. “It’s up to you to decide,” the member tells me. 
“Is this professional, is it platonic, or is it something romantic?” In an interview with Randall, Tran said, “I think we could displace Grindr in the coming years.” On any given week in San Francisco, Partiful invites float around the community. If there is a “regular Halloween party, the gays will have their own Halloween party, and Sam Altman will be there,” says Jayden Clark, a straight man who hosts a tech culture podcast and was not invited to the gay Halloween party. (Altman attended dressed as Spider-Man, a nod to Andrew Garfield, who played the superhero and has since been cast as Altman in an upcoming film.) I hear of not one but two White Lotus–themed gay tech parties, both equally extravagant. “Girls are not present,” says that same angel investor. “They are just not there.” There is also a “Gay VC Mafia” group chat that is, as one member describes it, “60 percent business” and “40 percent hee hee ha ha” about “classically gay topics.” With a steady churn of tech events aimed at gay men, the social incentives stack up fast. Connections blur—“professional, physical, or sometimes romantic,” as an AI founder puts it. The pull of this bubble is so strong, he continues, that it’s “an uphill battle to socialize with straight people.” None of this is necessarily unfamiliar in the clubby world of Silicon Valley, where the smart, successful, and wildly rich have always formed in-groups. There’s the so-called OpenAI mafia and the Airbnb mafia, and before those the PayPal mafia—alumni of moonshot companies who bankroll the next wave of startups. So some of what reads as advantage is, on closer inspection, structural and unremarkable. San Francisco combines two things in unusual density: one of the country’s largest gay populations and a tech industry that has reshaped global power. “For sure, gay men are overrepresented and have had an unbelievable run in the Bay Area,” says Mark, another gay entrepreneur who runs an AI startup. “In a city that has the most venture capital in the world, it isn’t surprising that this money is going directly to gay men.” (This perception, for what it’s worth, runs counter to statistics: Between 2000 and 2022, the years for which data is available, only 0.5 percent of startup venture funding went to LGBTQ+ founders.) “It’s not that there is some kind of gay mafia,” Mark continues. “But if I told you who are my friends that I want to invest in, they happen to be gays. Who are the people without kids who can grind away on the weekends? It’s the gays.” (Sources identified in this story by a first name only, like Mark, preferred the use of pseudonyms.) Imagine this, Mark says: You are a young, nerdy, closeted gay man. You grow up never quite fitting in. Your parents start asking questions. Why don’t you have a girlfriend? You tell them you’re too busy for a relationship. Eventually, you move to San Francisco, a city that, as one person puts it, is like “Disneyland for gay men.” Your world opens up. You meet other people like you—men who are openly out, many for the first time in their lives. These men happen to be working at influential companies. They are building technology that is astonishing. And slowly it dawns on you: Maybe you, too—a person who has spent a lifetime overlooked and underestimated—can build something extraordinary. “Gays feel,” Mark says, “that they have something to prove.” This is, more or less, the nature of how power and money have moved throughout networks since the dawn of time.
And gay networks seem naturally aligned to the dynamics of venture funding, where established wealth meets emerging talent. “One of the key things to realize is that gays are different than straights in many different ways,” says a longtime gay venture capitalist. “Gays are cross-generational.” While straight people tend to spend more time with people their own age, “that is not true with gay men. I can hang out with someone at an event who is 18 years old, and Peter [Thiel] might also be there.” Just because you are gay and work in tech does not necessarily mean you are part of the so-called gay tech mafia. Much of the queer spectrum is conspicuously absent from events geared toward gay founders. “There are barriers within the community,” says Danny Gray, a leader at Out Professionals, a networking organization for LGBTQ+ businesspeople. “Cis gay men are the biggest gay group within the acronym, and it is much harder for other letters.” Lesbians tend to be sidelined; when I ask the hyperconnected tech journalist Kara Swisher about the gay tech mafia, she says she wasn’t aware there was one. And even if you are a gay man, inclusion is not necessarily guaranteed. “I’ve found it hard to break into this group myself,” one gay investor tells me. “I probably need to lose 20 pounds.” It may be that what outsiders perceive as the gay tech mafia is not gay people working in tech, or even, broadly speaking, gay men, but a small, self-selecting group with shared politics and sensibilities. They are assumed to prize aesthetics and the masculine physique, scorn identity politics, reject DEI in favor of MEI—“merit, excellence, and intelligence”—and lean right-wing, if not MAGA. I’ve heard straight entrepreneurs describe them as “the Greco-Roman gays,” part of “an insular, hypermasculine culture” in which “women are seen as totally redundant and completely unnecessary.” (A woman who once worked for a gay Republican startup founder describes it like this: “You get about the same amount of misogyny, but not the sexual harassment. So that’s nice.”) Where, then, might these almighty power gays be observed in their natural habitat? This is one of the guiding questions in my research, the answer to which perpetually evades me. When I ask a gay investor if perhaps I can attend one of these parties as a fly-on-the-wall observer, he tells me no, because it would be weird, given that I am—unfortunately for the purposes of this story—a woman. “People will be like, ‘Is that your sister?’” he says. I float an idea past my editor that I attend a party disguised as a man. Perhaps, I suggest, we should discuss the budget for my makeover? While not entirely disinterested in the idea, my editor offers another suggestion, that he—a gay man—come along as a kind of chaperone, “for safety” purposes. Neither of us revisits the idea. There is one place, though, that is mentioned again and again: Barry’s, the fitness bootcamp, which has become a gay mecca, thanks in part to the high-profile investor Keith Rabois, who has long been one of its most avid devotees, to the point of teaching occasional classes. And one Barry’s in particular keeps coming up: “The Barry’s in the Castro is ranked supreme,” says that same gay angel investor. “It is all guys, all gays, and everyone has abs.” (“From what I’ve learned working here, gay men do love to work out,” confirms a female employee at the Castro Barry’s.) The fact is, most people seem eager to talk about this, no deceptions on my part necessary. 
Many of them reply almost immediately to my vague inquiries. Even more surprising is their willingness to talk at length. Calls often run for hours, blending measured observations about life in a masculine-dominated culture with tours through the most salacious industry intrigue of my entire career. There can be an edge to the gossip, though—an implication that one of the most reliable paths to power in Silicon Valley may run through the bedroom. Some men are eager to hop on a call to ask what I may or may not have already heard about them. One gay founder tells me how a rumor has been circulating (a version of which I have, in fact, heard) that he and his husband slept with a gay investor in exchange for a down payment on their home. “Do people really think,” he wonders, “that we can’t afford a condo?” Many have, at some point or another, been suspected of romantic involvement, even if they’ve never been in the same room together. When I call up Ben Ling, an investor and early Google employee, to ask about long-standing speculation that he might be a good match for Tim Cook—a pairing intriguing enough to be referenced in The Atlantic—he laughs. “People make up these rumors because they have nothing better to do,” he says. “Tim Cook does not know who I am.” And while it is true that at least some of these men know and see each other socially, these meetups do not reliably lead to romance. A friend of Rabois tells me that Rabois likes to tell a story of the time, years earlier, when he invited Sam Altman as his plus-one to an event. “He said that Sam brought two phones and was texting on both of them the entire time,” the friend says. “Keith says it was the worst date he ever went on.” (Use of the word “date” has, by relevant parties, been disputed.) For rising figures who have formed genuine friendships with powerful gay industry leaders, success sometimes comes with a penalty: the assumption that it is borrowed, not earned. Brad, a gay industry leader, has long lived with rumors about his friendship with Peter Thiel—rumors that followed him even as his career advanced. “When I started working with Peter so long ago, people would be like, Oh, did you sleep with him? Blah blah blah.” The answer, he says, is no. And yet, “for some reason everyone felt perfectly comfortable asking me about it. Straight people were interested in it generally, but the people who were really fucking fascinated were other gay guys. Guys would be like: What does he have that I don’t have? So then they assume, Well, Peter must have thought you were cute.” (Thiel did not respond to requests for comment.) Still, it’s naive to insist that intimacy with power is without its advantages. When Altman’s former boyfriend, early Stripe employee Lachy Groom, raised a $250 million solo venture fund while still in his twenties, some observers read the achievement less as an anomaly of talent, I’m told, than as an artifact of access. But at the time of this raise, Groom had already launched two funds back to back, the second of which looked to target $100 million. So this interpretation, according to a gay investor close to both Groom and Altman, is not entirely fair: “When Lachy and Sam were dating, Sam was kind of famous, but not nearly as famous as he is now, and Lachy was a person in his own right,” the investor says. “I did give a reference to [an investor in Groom’s fund] saying, ‘Yes, he’s unproven as an investor, yes, he’s young. 
But he is in the network, and he is Sam’s ex-boyfriend.’ But Lachy didn’t date Sam to get these things.” (Groom declined to comment on the record, as did a representative for Altman.) Meanwhile, when straight men attempt to tap into the gay network, the gay investors chat amongst themselves. Mark, who hosts dinner parties and events for the gay tech community in San Francisco, says that he noticed one man constantly RSVPing to his events. “We don’t have a purity test,” he says, “but someone said that guy is definitely not gay, he just goes to the gay man events because he wants deal flow.” It isn’t like straight men are excluded per se, but they are not exactly a welcome addition to the world of gay capital. The joke, if a straight founder does show up, is: Just don’t tell anyone you’re straight. “I have seen straight men do untoward things,” says a gay investor. “There is a straight guy who is not important enough to be named who would pitch all the gay investors, and in one meeting at the VC partnership he was talking to a gay general partner who I know. And in the meeting, this guy put his hand on the GP’s leg under the table. It is so inappropriate. It became a running joke, like, not this guy again.” One person in particular has helped fuel the notion that being gay can benefit one’s career: Delian Asparouhov, the mischievous, 31-year-old cofounder of Varda Space Industries, who was once hired as Rabois’ chief of staff. Rabois, who helped Thiel start PayPal and was later a partner at Thiel’s venture firm, Founders Fund, was a subject of corporate scrutiny years earlier. While at Square, Rabois was accused of sexual harassment by a male colleague, an episode that ultimately ended with Rabois’ departure from the company. (After an internal investigation, the company backed Rabois.) In 2018, about 100 people attended Rabois’ wedding to Jacob Helberg, a former adviser at Palantir who currently serves as the US undersecretary of state for economic growth. The wedding was a multiday affair with a guest list that included many of the most important people in tech and culminated in a beachside wedding ceremony officiated by Sam Altman. (Rabois’ bad “date” with Altman resulted, apparently, in close friendship.) During the wedding, Asparouhov gave a toast, which was later recalled by Fred, a longtime gay tech leader who was in attendance. “Delian said something like, ‘I’m the intern that Keith hired, and I would wear short shorts and tank tops at Square.’” Fred says he was sitting at a table with two famous tech executives. “We just raised our eyebrows,” Fred continues. “It was so embarrassing that Delian would say that at someone’s wedding. I mean, here was Keith getting married to Jacob.” (Other wedding attendees claim not to remember the contents of the speech but say it sounds like Asparouhov.) Rumors of Asparouhov and Rabois’ dating lives have long traveled in industry circles, thanks in part to Asparouhov, who has fanned the flames online. (“Delian is like Gretchen Wieners,” explains Fred.) In 2022, a popular anonymous tech insider X account, Roon, tweeted that it was “crazy how venture capitalists have reinvented the Roman system of pederasty.” Asparouhov responded to the tweet almost immediately: “It only took a little gay and now I get to work on space factories,” he wrote. “Pretty reasonable trade.” He now says the tweet was “obviously a joke.” But as Fred recounted, Asparouhov was known for wearing neon tank tops, short shorts, and mismatched shoes when he joined Square in 2012. 
“He would jump a lot—it was very odd,” says someone who worked at the company at that time. Others have similar recollections. OpenStore, the Miami-based company Rabois cofounded in 2021, which mostly shut down last year, seemed to be, according to John, who says he visited its offices, “almost like a harem, filled with jacked white men, all of them handsome and good-looking, straight and gay. People were wearing kind of inappropriate clothing: really short shorts and tight shirts even though the AC was blasting.” Rabois, when I ask him for a comment, denies this categorically. “Attire was quite standard for Florida,” he says. “And I doubt more than two of the 100-plus employees could be reasonably described as ‘jacked.’” Rabois has been known to take extravagant vacations—helicopter trips to Icelandic volcanoes, white-water rafting in Costa Rica. Exclusion can stir serious envy, as it did with one young gay tech consultant I speak with who says he has begun a kind of “micro-journalism” project to track the appearances of a couple of guys on Rabois’ Instagram. These are “low-level” workers, he says, who nonetheless are “always posting photos in St. Barts.” “Here I am doomscrolling on the A train, and I’m like, ‘How are these guys on a private jet?’” But how far back do these rumors really go? Has Silicon Valley always been semi-secretly, kinda-sorta gay? More than once, I’m told to connect with Joel, a gay man who works in tech and who spent a lot of time among the older in-group of powerful gay men in Silicon Valley, more than a decade ago. “So,” I say when he answers my call, “are you a member of the gay tech mafia?” He laughs. “Maybe someone thinks I’m in it, which is why you’re calling me.” When I ask Joel to explain how the gay tech mafia works, he tells me that it’s similar to people who “went to the same college or came from a similar background or a similar town.” And it indeed started, he says, with people like Rabois and Thiel, who, after they rose to power, “brought a lot of people along. Keith hired gays at Square, and Peter hired Mike [Solana] at Founders Fund. Then there was a cohort of Google gays that Marissa Mayer ran in 2010. And there is Sam, who is friends with Keith, and Sam was running in parallel, assembling other gays around him.” Joel tells me about the parties at the time—the exact specifics of which remain off the record. But they were, in summary, what you might expect. “There was lots of drinking that would turn into weird situations. Random people hooking up. Generally, there was a sexual tone.” But this was years ago. These types of parties, at least from what I’ve heard, have either disappeared or moved entirely underground. (“Once you get to the end of your reporting, you will find that the real story is much less explosive,” says Mark. “Like all these wild orgies: If you do find out where they are, please tell me, because I’d like to go.”) I tell Joel that I’ve heard from some young men in the tech industry who feel pressured to sleep around to get ahead. Was that true in his experience? “Mmmmm,” he says, and pauses. Then he bursts out laughing. “I mean, in all of this, there are weird gray areas. It can be very sexual. It is not all professional. A lot of people have dated or slept with each other.” He had experienced a kind of coercion firsthand. “I definitely felt pressured to do—not overtly illegal things. But they walked the line.” Joel is older now, and while he can see how someone might describe this as an abuse of power, he resists the framing. 
The exchange of sex and status may not be the reason these men rose so quickly, but it can be a factor—if only because sex, as he puts it, “makes people become closer rapidly.” As Silicon Valley has matured into the power center of the world, it has grown sharply cutthroat. Leverage is scarce, and ambition is often laced with a kind of ruthless opportunism. In gay circles, some feel the Valley resembles the old Hollywood casting couch. Many of the critics are rising gay entrepreneurs and investors themselves, for whom parts of the gay community seem steeped in the attitudes and values of the 1970s and ’80s. “There’s this feeling,” one observes, “that because there were years of historical oppressions only recently recognized, certain people think, ‘I can do this, or I deserve this, because no one will cancel me for it.’” This is a community that, as one young gay investor describes it, is “power-hungry, network-driven, and, at times, very horny.” The arrangement, he suggests, is tacitly understood by everyone involved: “Both sides know they are in the game and want something from each other. Which is fine, I guess, if you’re into that.” This is not, in his telling, the whole of the gay tech scene, most of which is a “lovely, amazing community that supports its people and their career progress.” But alongside that exists a sexual undercurrent—one that, he insists, is impossible to deny and especially pronounced in AI circles. “It’s like a gay nepo thing,” he says. “While it’s not explicitly for sexual favors, there is an element at work in the background. Like, you’re young and you’re hot and I’m down to hook up.” One gay man, Dean, describes moving through a professional world in which sexual suggestion flowed freely. Early on, it came from limited partners curious about his prospective fund; after he raised the fund, it came from founders seeking capital. In one instance, a potential limited partner proposed a meeting at his home. “He was like, ‘We don’t need to wear clothes, we can just sit around and talk about your fund in my hot tub.’” Dean frames these encounters as an irritation—ambient, expected, and largely inconsequential. “Sex is devalued in gay male culture,” he says. “Often, it’s just another piece of currency.” After Dean raised his fund, he was occasionally approached by young men, “founders looking for money who indicated they were open to whatever it takes to raise it.” At events geared toward LGBT founders, young men would ask to grab drinks one on one. Sometimes, they’d send nudes on Instagram. “Like ‘Hey …’ with a winky face. And ‘Do you like that?’ And I’d be like, ‘No, that’s actually inappropriate,’” he says. It’s not confined to Silicon Valley, he adds. Having left tech for a different industry, Dean has come to see the entanglement of sex, power, and ambition as a recurring feature of certain pockets of gay professional life. Another man who works in the queer tech space puts it this way: “There is an aspect of being queer and in business and in life and having relationships that can be frankly sexual and not sexual at the same time. You can turn off and do business with someone you were hooking up with yesterday.” Plus, he continues, there is the inescapable fact that much of gay male culture tends to be sexually charged. “Straight guys have the golf course. Gay guys have the orgy,” he says. “It doesn’t mean it’s problematic. 
It’s consensual, but it is a way we bond and connect.” Of the 31 gay men I spoke to for this story, nine tell me they experienced unwanted advances from other gay men in the industry. Some of these advances were mild but annoying: repeated invitations to soak in hot tubs or explore wine cellars. Others involved unwanted touches. One person, an up-and-coming gay investor, tells me that he believes that turning down a sexual advance from a senior colleague cost him a job. Multiple sources speak of “sex pests” who send unsolicited dick pics and make overt come-ons. “What demoralizes me in the conversations around the gays in tech in San Francisco is that none of this is entirely a secret,” says one gay investor who experienced an unwanted sexual advance. “People are aware this is an issue.” Another gay man who works in tech adds: “There is an element to this story that is a cautionary tale. You take a brilliant entrepreneur who has a great idea trying to make it in the world of venture capital. And then they have to put up with someone sending them dick pics and asking for an investment meeting. It shouldn’t be normalized. And right now, everything is so gray. Like, it’s our little thing, our little world. But it has a massive impact.” Again and again, gay men working in tech ask me: Why has this story never been written? The question somewhat answers itself. Unfair stereotypes about gay men persist, and why else would sources insist on pseudonyms? I am warned, more than once, to be careful, that figures in Silicon Valley are “vindictive.” Even as many consider this culture of sexual pressure a feature of Silicon Valley life, it is, as someone else tells me, “a true minefield” to write about. Gerald knows the feeling. He’s a young gay man in San Francisco, described by acquaintances as a “quirky individual” and a “social puppeteer.” Over a call, Gerald lays out the reasons he has hesitated to talk about his time in tech. “This is a complex subject,” he says, “and I don’t think readers can draw the distinction between some bad men being gay and all gay men being bad. It can be a slippery slope into homophobia.” He won’t give his story to me. Not yet. But he does tell me he suspects that other stories, in the coming months, will surface. “People have a difficult time articulating power with nuance,” he says. “This is not just one story. There will be many.” From what he’s told me so far, and from everything else I’ve heard—the heartfelt, late-night confessions over the phone; the insights shared quietly and kept off the record; the admissions of dozens of funny, brilliant, young gay men competing for, yes, power and money and recognition, but also for love, romance, and a place to belong in the heart of San Francisco—I believe him. Update: 2/20/2025 4:30 PM EDT: WIRED has clarified the chronology of Lachy Groom's fund raises, including two raises that occurred prior to the $250 million raise in 2021.
========================================
[SOURCE: https://www.mako.co.il/24tv] | [TOKENS: 1662]
Channel 24
Pablo Rosenberg: "I thought it was the most surreal thing that ever happened." The singer opens up about his move to the big screen.
"I don't invite them to premieres; they had their own." Meshi Kleinstein on her parents, Rami and Rita.
Elisha Banai: "There are some things it's better to deny." The singer and actor on the height difference with his fiancée.
Shavuot is almost here: a lasagna and pizza recipe for the holiday meal. "When Do We Eat?" with the dishes that will complete the feast.
An unforgettable combination: Tiberias-style fish and Druze pita. "When Do We Eat?" with a recipe for an excellent homemade meal.
A tempting tofu stew: a recipe for a vegetarian meal. Plus: quinoa mujadara with black lentils.
"People miss the way things used to be, the neighborhoods." The secret of the success of the series "The '90s."
The new film starring Roni Kuban and Noa Koler: director Danny Rosenberg's work arrives in cinemas.
"There was always a connection to cancer, from the first performance to the last." Gidi Gov on the play his wife wrote that became a film.
Pretty in pink: a delicious mousse that dazzles the eyes. With white chocolate and Raffaello, we couldn't stop snacking.
With one bowl: a recipe for an addictive carrot cake. How to quickly make a tasty, perfect cake.
Finger-licking good: berry crème brûlée. Or Spitz's take on the classic dessert is simply wonderful.
Wild: a crazy recipe for mushroom samosas. Get ready to undo a button, because the flavor is simply amazing.
Red hot: a wild recipe for curried pargiyot (chicken thighs). Tired of the usual chicken? Wait until you taste this dish.
From India with love: a wonderful recipe for palak paneer. A classic, beloved, and above all breathtaking dish.
When do your favorite shows air? Click in to find out.
The bride of "Married at First Sight." Want a relationship? Download the form, fill it out, and send it to us.
Where did "Johnny" place on the karaoke chart? These are the most-loved songs of the month.
Or Spitz: "Our country needs pink." Or Spitz talks about coping with the second lockdown.
Ran Danker fulfills a dream and gets his own talk show. Meet Channel 24's wild new late-night show.
Maggie Tabibi: "No one prepares you for fame." The host of "Dash Im Shir" returns to law school.
This is what made Dana Zarmon lose her place in the middle of filming. The bloopers from the Instagram star's new show.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-143] | [TOKENS: 10728]
PlayStation (console)

The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, and in Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted the joint research but decided to develop the work it had done with Nintendo and Sega into a console of its own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
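The 3.5 megabyte figure refers to the console's combined memory pools rather than a single bank of RAM. The breakdown below is an illustrative sketch only: the 2 MB of main RAM and 1 MB of video RAM are stated in the hardware description later in this article, while the 512 KB of dedicated sound RAM is an assumed figure from commonly cited specifications rather than something given here.

    # Illustrative breakdown of the roughly 3.5 MB of memory developers had to work within.
    # Main and video RAM sizes are taken from this article's hardware section;
    # the sound RAM size is an assumption based on commonly cited specifications.
    main_ram_kb = 2 * 1024     # 2 MB of main RAM
    video_ram_kb = 1 * 1024    # 1 MB of video RAM
    sound_ram_kb = 512         # 512 KB of sound RAM (assumed)

    total_mb = (main_ram_kb + video_ram_kb + sound_ram_kb) / 1024
    print(total_mb)            # 3.5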
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said "$299" and left the audience with a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 across Sony showrooms, selling 100 units. Sony finally launched the console (PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released officially because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's button symbols stood in for letters, rendered in plain text as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red E, read as "ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub brands such as Ministry of Sound, as well as festival promoters, to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to shake Sony's dominance; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offering a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites and 180,000 shaded and textured polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw further reductions in ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications through C programming compilers.
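The throughput and memory figures above can be put in perspective with some simple arithmetic. The sketch below is illustrative only: it assumes a 60 Hz NTSC display refresh and a 16-bit-per-pixel framebuffer, neither of which is specified in the paragraph above, and simply divides the quoted per-second rates by the refresh rate.

    # Rough per-frame budgets implied by the figures quoted above.
    # The 60 Hz refresh rate and 2-byte (16-bit) pixel size are assumptions for illustration.
    REFRESH_HZ = 60
    textured_per_frame = 180_000 // REFRESH_HZ   # about 3,000 textured polygons per frame
    flat_per_frame = 360_000 // REFRESH_HZ       # about 6,000 flat-shaded polygons per frame

    # Framebuffer footprint at the two ends of the supported resolution range.
    low_res_kb = 256 * 224 * 2 / 1024            # about 112 KB
    high_res_kb = 640 * 480 * 2 / 1024           # about 600 KB of the 1 MB of video RAM

    print(textured_per_frame, flat_per_frame, round(low_res_kb), round(high_res_kb))

Under these assumptions, the highest display mode consumes most of the video RAM and leaves comparatively little room for textures, which helps explain why many games targeted the lower display modes in practice.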
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (, , , ). Rather than depicting traditionally used letters or numbers onto its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than its Japanese variant, to account for the fact the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and the copies it produced therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000 models, can experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the greatest and most influential video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute heavily to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985 and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony as profits from their video game division contributed to 23%. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of the Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of its key factors in gaining mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, attempting to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller initial runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
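The cost comparison described above can be illustrated with a simple, purely hypothetical calculation. The dollar figures in the sketch below are assumptions chosen only to show how a shelf price roughly 40% lower could leave per-unit net revenue essentially unchanged; they are not published Sony or Nintendo prices.

    # Hypothetical illustration of the CD-ROM versus cartridge economics described above.
    # Every dollar amount here is assumed for the sake of the arithmetic, not a historical price.
    cartridge_retail = 60.0      # assumed cartridge retail price
    cartridge_unit_cost = 25.0   # assumed per-unit cartridge manufacturing cost
    cd_unit_cost = 2.0           # assumed per-unit CD pressing cost

    cd_retail = cartridge_retail * (1 - 0.40)               # about 40% lower price to the user
    cartridge_net = cartridge_retail - cartridge_unit_cost  # 35.0 per unit before other costs
    cd_net = cd_retail - cd_unit_cost                        # 34.0 per unit before other costs

    print(cd_retail, cartridge_net, cd_net)

With these assumed numbers, the publisher keeps roughly the same amount per copy despite the much lower retail price, which is the effect the passage above describes.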
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_ref-AnnualReport2016_219-0] | [TOKENS: 8626]
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, and Threads. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as part of Big Tech, alongside Alphabet (Google), Amazon, Apple, Microsoft, and Nvidia, the largest technology companies in the United States, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation yet for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Mobility and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline.
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. S&P Global Ratings added Facebook to its S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook similar to TikTok that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, the Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop their own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to pool in $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory authority to operate as a payments service, the Libra Association had planned to launch a limited format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021 about Facebook's plan to rebrand the company and change its name. 
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the pivoting efforts to building the metaverse – without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. Based on Facebook's PR campaign, the name change reflects the company's shifting long term focus of building the metaverse, a digital extension of the physical world by social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, thus it would be transferring its rights to the name to Meta Platforms, and the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertisement revenue, an amount equal to roughly 8% of its revenue for 2021. In meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to published reports by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a “more traditional” role. In March 2022, Meta (except Meta-owned WhatsApp) and Instagram were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (up to genocidal calls) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales on the line of products as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8bn. Analysts and journalists attributed the decline to its advertising business, which had been constrained by Apple's App Tracking Transparency feature and the number of people opting out of tracking by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite having been in the top 5 the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gave the Inter-university Consortium for Political and Social Research $1.3 million to fund the Social Media Archive, which aims to make its data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record €1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens.: 250 In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after announcing that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage but would instead operate on an open-source model to allow it to ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use. An earlier version of Llama was released to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires platforms to compensate Canadian news outlets for news content shared on them. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of achieving a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing false copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, exploiting Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On 4 October 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10 billion global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig-butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running financial-services ads in Australia to verify the identities of the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage, impacting accounts on all of its social media and messaging applications. Outage reports on Downdetector reached more than 70,000 for Instagram and more than 100,000 for Facebook within minutes. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of their being gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit in an effort to make it leaner and more effective. The company described its AI unit as "bloated" and said it was seeking to trim down the department. The layoffs affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research (FAIR) unit and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR, which released its first consumer virtual reality headset in 2016, for $2.3 billion in cash and stock. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, developer of one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded as Meta Platforms, Inc., Oculus was rebranded as Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million. Giphy was to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. The deal reportedly valued Kustomer at slightly over $1 billion. It was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had reached $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to office in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. Beginning in October 2025, Meta removed and restricted access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "One of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to eliminate the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives, partnering with third-party fact-checkers and publicly flagging fake news, were regularly ineffective and appeared to have minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue disseminating a falsified video of US President Joe Biden, even after it had been proven fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform was excessive, the decision received criticism from fact-checking organizations, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech and to explicitly allow users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of the wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc., and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for serious and repeated breaches of privacy law in connection with the Cambridge Analytica scandal. Each violation of the Privacy Act carries a theoretical liability of up to $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook as an antitrust lawsuit against Facebook. The lawsuit concerned Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual internet in which the Facebook-WhatsApp-Instagram entity did not exist and prove that the combination harmed competition or consumers. In November 2025, it was ruled that Meta had not violated antitrust law and did not hold a monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama. The suit alleged poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 content screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company refused to take responsibility.
The company noted that it had developed tools for parents to keep track of their children's activity on Instagram and set time limits, in addition to its "Take a break" reminders. In addition, the company said it was providing resources specific to eating disorders and developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the algorithmic ad-targeting tool at issue. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit in which three plaintiffs claimed that Facebook inflamed civil violence in Ethiopia in 2021 could proceed. In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users to either allow their personal data to be used to target advertisements, or pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content including depictions of murders, extreme violence and child sexual abuse. Meta moved the moderation service to the Ghanaian capital of Accra after legal issues in its previous location, Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest conditions there are worse than at the previous Kenyan location, with many workers afraid of speaking out due to fear of having to return to conflict zones. Workers reported developing mental illnesses, attempting suicide, and receiving low pay. On 26 January 2026, a case was filed in New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers had warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares. Insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding, Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google that prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and potential engagement levels of the advertisement itself. As with other online advertising platforms such as Google and Twitter, targeted advertising is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland would be around 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. Irish at the GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta is making use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who lack it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London. Meta reportedly had another 18 years left on its lease on the site. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Frances Haugen, the former Facebook employee and whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13.
========================================
[SOURCE: https://news.ycombinator.com/item?id=47094192] | [TOKENS: 5087]
One security checking tool that has genuinely impressed me recently is CodeQL. If you're using GitHub, you can run this as part of GitHub Advanced Security. Unlike those naïve tools, CodeQL seems to perform a real tracing analysis through the code, so its report doesn't just say you have user-provided data being used dangerously, it shows you a complete, step-by-step path through the code that connects the input to the dangerous usage. This provides useful, actionable information to assess and fix real vulnerabilities, and it is inherently resistant to false positives. Presumably there is still a possibility of false negatives with this approach, particularly with more dynamic languages like Python where you could surely write code that is obfuscated enough to avoid detection by the tracing analysis. However, most of us don't intentionally do that, and it's still useful to find the rest of the issues even if the results aren't perfect and 100% complete.

By Rice's Theorem, I somehow doubt that.

It's just a silly historical artifact that we treat DoS as special, imo.

If the system is configured to "fail open", and it's something validating access (say anti-fraud), then the DoS becomes a fraud hole and profitable to exploit. Once discovered, this runs away _really_ quickly. Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?" Then, "what happens when people find out I pay out on shakedowns?"

The problem here isn't the DoS, it's the fail open design.

> Then, "what happens when people find out I pay out on shakedowns?"

What do you mean? You pay to someone else than who did the DoS. You pay your way out of a DoS by throwing more resources at the problem, both in raw capacity and in network blocking capabilities. So how is that incentivising the attacker? Or did you mean some literal blackmailing??
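A minimal sketch of the fail-open point made above, with a hypothetical checkFraud call standing in for the access-validating service (none of this is from the thread): when the checker errors out or times out under load, the fail-open variant silently authorizes, which is what turns a DoS against the checker into a fraud hole rather than "just" an availability problem.

```go
// Hypothetical illustration of fail-open vs fail-closed access validation.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errUnavailable = errors.New("fraud service unavailable")

// checkFraud stands in for a call to an external anti-fraud service.
func checkFraud(ctx context.Context, txID string) (bool, error) {
	select {
	case <-ctx.Done():
		return false, errUnavailable // e.g. timed out while the service is under DoS
	case <-time.After(10 * time.Millisecond):
		return txID != "known-bad", nil
	}
}

func authorizeFailOpen(ctx context.Context, txID string) bool {
	ok, err := checkFraud(ctx, txID)
	if err != nil {
		return true // fail open: knocking over the checker approves everything
	}
	return ok
}

func authorizeFailClosed(ctx context.Context, txID string) bool {
	ok, err := checkFraud(ctx, txID)
	return err == nil && ok // fail closed: an outage blocks transactions instead
}

func main() {
	// A 1 ms budget against a 10 ms check simulates the checker being unreachable.
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	fmt.Println(authorizeFailOpen(ctx, "tx-1"), authorizeFailClosed(ctx, "tx-1"))
}
```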
Security team cannot explain attack surface. In the end it is binary. Fix it or take the blame.

DoS is distinct because it's only considered a "security" issue due to arbitrary conversations that happened decades ago. There's simply not a good justification today for it. If you care about DoS, you care about almost every bug, and this is something for your team to consider for availability. That is distinct from, say, remote code execution, which not only encompasses DoS but is radically more powerful. I think it's entirely reasonable to say "RCE is worth calling out as a particularly powerful capability". I suppose I would put it this way. An API has various guarantees. Some of those guarantees are on "won't crash", or "terminates eventually", but that's actually insanely uncommon and not standard, therefore DoS is sort of pointless. Some of those guarantees are "won't let unauthorized users log in" or "won't give arbitrary code execution", which are guarantees we kind of just want to take for granted because they're so insanely important to the vast majority of users. I kinda reject the framing that it's impossible to categorize security vulnerabilities broadly without extremely specific threat models, I just think that that's the case for DoS. There are other issues like "is it real" ie: "is this even exploitable?" and there's perhaps some nuance, and there's issues like "this isn't reachable from my code", etc. But I do think DoS doesn't fall into the nuanced position, it's just flatly an outdated concept.
But at the same time i don't know. Pre-cloudflare bringing cheap ddos mitigation to the masses, i suspect most website operators would have preferred to be subject to an xss attack over a DoS. At least xss has a viable fix path (of course volumetric dos is a different beast than cve type dos vulns).

We have decades of history of memory corruption bugs that were initially thought to only result in a DoS, that with a little bit of work on the part of exploit developers have turned into reliable RCE.

Strongly disagree. While it might not matter much in some / even many domains, it absolutely can be mission critical. Examples are: guidance and control systems in vehicles and airplanes, industrial processes which need to run uninterrupted, critical infrastructure and medicine / health care.

I can produce a web server that prints hello world and if you send it enough traffic it will crash. If I can put user input into a regex and the response time might go up by 1ms, no one will say it's suddenly a valid CVE. Then someone will demonstrate that with a 1 MB input string it takes 4 ms to respond and claim they've earned a CVE for it. I disagree. If you simply use webpack you've probably seen a dozen of these where the vulnerable input was inside the webpack.config.json file. The whole category should go in the bin.

But if we no longer classed DoSes as vulnerabilities they might.

CVEs are helpful for describing the local property of a vulnerability. DoS just isn't interesting in that regard because it's only a security property if you have a very specific threat model, and your threat model isn't that localized (because it's your threat model).
That's totally different from RCE, which is virtually always a security property regardless of threat model (unless your system is, say, "aws lambda" where that's the whole point). It's just a total reversal.

If availability isn't part of CIA then a literal brick fulfills the requirements of security and the entire practice of secure systems is pointless.

That and it can't understand that a tool that runs as the user on their laptop really doesn't need to sanitise the inputs when it's generating a command. If the user wanted to execute the command they could without having to obfuscate it sufficiently to get through the tool. Nope, gotta waste everyone's time running sanitisation methods. Or just ignore the stupid code review tool.

We also suffer from this. Although in some cases it's due to a dev dependency. It's crazy how much noise it adds specifically from ReDoS...

I made a GitHub action that alerts if a PR adds a vulnerable call, which I think pairs nicely with the advice to only actually fix vulnerable calls. https://github.com/imjasonh/govulncheck-action You can also just run the stock tool in your GHA, but I liked being able to get annotations and comments in the PR. Incidentally, the repo has dependabot enabled with auto-merge for those PRs, which is IMO the best you can do for JS codebases.

If your test suite is up to the task you'll find defects in new updates every now and then, but for me this has even led to some open source contributions, engaging with our dependencies' maintainers and so on. So I think overall it promotes good practices even though it can be a bit annoying at times.

https://docs.github.com/en/code-security/reference/supply-ch...

You can have Dependabot enabled, but turn off automatic PRs. You can then manually generate a PR for an auto-fixable issue if you want, or just do the fixes yourself and watch the issue number shrink.

https://github.com/refined-github/refined-github

It's good optimization advice, if you have the time, or suffer enough from the described pain points, to apply it.

What I do instead: monthly calendar reminder, run npm audit, update things that actually matter (security patches, breaking bugs), ignore patch bumps on stable deps. The goal isn't "every dep is always current" - it's "nothing in production has a known vulnerability". Very different targets.

The fundamental problem with Dependabot is that it treats dependency management as a security problem when it's actually a maintenance problem. A vulnerability in a function you never call is not a security issue — it's noise.
But Dependabot can't distinguish the two because it operates at the version level, not the call graph level. For Python projects I've found pip-audit with the --desc flag more useful than Dependabot. It's still version-based, but at least it doesn't create PRs that break your CI at 3am. The real solution is better static analysis that understands reachability, but until that exists for every ecosystem, turning off the noisy tools and doing manual quarterly audits might actually be more secure in practice, because you'll actually read the results instead of auto-merging them.

But I don't quite understand what Dependabot is doing for Go specifically. The vulnerability goes away without source code changes if the dependency is updated from version 1.1.0 to 1.1.1. So anyone building the software (producing an application binary) could just do that, and the intermediate packages would not have to change at all. But it doesn't seem like the standard Go toolchain automates this.

https://github.com/google/closure-compiler

Are there any tools for handling these kinds of CVEs contextually? (Besides migrating all our base images to chainguard/docker hardened images etc)

> These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem.

Where did the CVSS score come from exactly? Does dependabot generate CVEs automatically?

Separately, I love the idea of the `geomys/sandboxed-step` action, but I've got such an aversion to using anyone else's actions, besides the first-party `actions/*` ones. I'll give sandboxed-step a look, sounds like it would be a nice thing to keep in my toolbox.

Yeah, same. FWIW, geomys/sandboxed-step goes out of its way to use the GitHub Immutable Releases to make the git tag hopefully actually immutable.

How about `cargo-audit`?

For security vulnerabilities, I argue that updating might not be enough! What if your users' data was compromised? What if your keys should be considered exposed? But the only way to have the bandwidth to do proper triage is by first minimizing false positives.

There never could be, these languages are simply too dynamic.

(Source: I maintain pip-audit, where this has been a long-standing feature request. We're still mostly in a place of lacking good metadata from vulnerability feeds to enable it.)

It doesn't have the code tracing ability that my sibling is referring to, but it's better than nothing.

We also let renovate[bot] (similar to dependabot) merge non-major dep updates if tests pass. I hardly notice when deps have small updates. https://github.com/search?q=org%3Amoov-io+is%3Apr+is%3Amerge...
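A minimal, self-contained Go sketch of the reachability point being discussed here; the functions and the "advisory" are purely illustrative (they do not correspond to any real vulnerability). A version-level scanner keys off the module versions in go.mod, whereas a call-graph tool such as govulncheck only raises a finding when the vulnerable symbol is actually reachable from your code.

```go
// Illustrative only: neither function below corresponds to a real advisory.
package main

import (
	"fmt"
	"net/http"
)

// reachableFromMain is in the call graph, so a symbol-level tool such as
// govulncheck would consider any advisory affecting the symbols it calls.
func reachableFromMain() {
	fmt.Println(http.DetectContentType([]byte("plain text")))
}

// neverCalled is dead code. If a hypothetical advisory only affected symbols
// used here, a call-graph analysis would treat the module as informational
// rather than a finding, while a version-level scanner like Dependabot would
// still flag the dependency and open a PR.
func neverCalled() {
	_, _ = http.Get("https://example.com")
}

func main() {
	reachableFromMain()
	// Typical usage of the scanner itself (run from the module root):
	//   govulncheck ./...
}
```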
https://fossa.com/products/fossabot/ We have some of the best JS/TS analysis out there based on a custom static analysis engine designed for this use-case. You get free credits each month and we'd love feedback on which ecosystems are next… Java, Python? Totally agree with the author that static analysis like govulncheck is the secret weapon to success with this problem! Dynamic languages are just much harder. We have a really cool eval framework as well that we've blogged about.

Fossadep, Fossacheck, Fossasafe.

Fossahappenin, Fossagoinon.

If you want something more structured, I've been playing with and can recommend Renovate (no affiliation). Renovate supports far more ecosystems, has a better community and customisation. Having tried it, I can't believe how relatively poor Dependabot, the default tool we all put up with, is by comparison. Take something simple like multi-layer Dockerfiles: this has been a Docker feature for a while now, yet it's still silently unsupported by Dependabot!

Search revealed the Sonatype Scan Gradle plugin. How is it?

We're in this space and our approach was to supplement Dependabot rather than replace it. Our app (https://www.infield.ai) focuses more on the project management and team coordination aspect of dependency management. We break upgrade work down into three swim lanes: a) individual upgrades that are required in order to address a known security vulnerability (reactive, most addressed by Dependabot), b) medium-priority upgrades due to staleness or abandonedness, and c) framework upgrades that may take several months to complete, like upgrading Rails or Django. Our software helps you prioritize the work in each of these buckets, record what work has been done, and track your libyear over time so you can manage your maintenance rotation.

GitHub Actions is the biggest security risk in this whole setup. Honestly not that complicated.

Absolutely wild.

https://github.com/imjasonh/go-cooldown It's not running anymore but you get the idea. It should be very easy to deploy anywhere you want.
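The go-cooldown link above is shared without much surrounding context. Assuming it refers to the commonly discussed "cooldown" mitigation for supply-chain attacks (do not consume a dependency version until it has been public for some number of days, so freshly published malicious releases get caught and yanked first), the following is a minimal Go sketch of that policy check; the repository itself is the authority on what it actually does, and this is not its implementation. The versions and timestamps are fabricated for illustration.

```go
// A sketch of a "cooldown" policy: refuse dependency versions published less
// than a configurable number of days ago. Hypothetical data; not go-cooldown's code.
package main

import (
	"fmt"
	"time"
)

// allowedByCooldown reports whether a version published at publishedAt has been
// public for at least the cooldown period as of now.
func allowedByCooldown(publishedAt, now time.Time, cooldown time.Duration) bool {
	return now.Sub(publishedAt) >= cooldown
}

func main() {
	const cooldown = 7 * 24 * time.Hour
	now := time.Now()

	// Made-up example versions with made-up publish times.
	versions := map[string]time.Time{
		"v1.4.2 (three weeks old)": now.Add(-21 * 24 * time.Hour),
		"v1.5.0 (published today)": now.Add(-2 * time.Hour),
	}
	for v, published := range versions {
		fmt.Printf("%s allowed: %v\n", v, allowedByCooldown(published, now, cooldown))
	}
}
```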
(I'm a Renovate maintainer.) (I agree with Filippo's post and it can also be applied to Renovate's security updates for Go modules - we don't have a way, right now, of ingesting better data sources like `govulncheck` when raising security PRs.)

That just reminds me that I got a Dependabot alert for CVE-2026-25727 – "time vulnerable to stack exhaustion Denial of Service attack" – across multiple of my repositories.

I think that for FOSS the F as in Gratis is always going to be the root cause of security conflicts. If developers are not paid, security is always going to be a problem; you are trying to get something out of nothing otherwise, and the accounting equation will not balance. Exploiting someone else is precisely the act that leaves you open to exploitation (only according to Nash Game Theory). "158 projects need funding" IS the vector! I'm not saying that JohnDoe/react-openai-redux-widget is going to go rogue, but with what budget are they going to be able to secure their own systems? My advice is, if it ever comes to the point where you need to install dependencies to control your growing dependency graph, consider deleting some dependencies instead.

Isn't FOSS a combination of the diverging ideas of "Open Source" and "Free Software"? The "Free" in "Free Software" very much does not mean "Gratis".

Go's tooling is exceptional here because the language was designed with this in mind - static analysis can trace exactly which symbols you import and call. govulncheck exploits this to give you meaningful alerts. The npm ecosystem is even worse because dynamic requires and monkey-patching make static analysis much harder. You end up with dependency scanners that can't distinguish between "this package could theoretically be vulnerable" and "your code calls the vulnerable function." The irony is that Dependabot's noise makes teams less secure, not more. When every PR has 12 security alerts, people stop reading them. Alert fatigue is a real attack surface.

govulncheck solves this if your auditor understands it. But most third-party security questionnaires still ask "how do you handle dependency vulnerabilities?" and expect the answer to involve automated patching. Explaining that you run static analysis for symbol reachability and only update when actually affected is a harder sell than "we merge Dependabot PRs within 48 hours."
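To make the thread's opening comment about tracing analysis concrete, here is a minimal, hypothetical Go handler (not code from the thread; the route and parameter names are made up). A taint-tracking query such as CodeQL's command-injection check for Go would typically report the chain from the request parameter (source) to the shell invocation (sink) as a step-by-step path, rather than just flagging the final line.

```go
// Hypothetical handler, written only to illustrate a source-to-sink taint path.
package main

import (
	"log"
	"net/http"
	"os/exec"
)

func pingHandler(w http.ResponseWriter, r *http.Request) {
	host := r.URL.Query().Get("host") // source: attacker-controlled input

	// sink: untrusted input concatenated into a shell command (command injection);
	// a path-sensitive analysis reports host -> command string -> exec step by step
	out, err := exec.Command("sh", "-c", "ping -c 1 "+host).CombinedOutput()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(out)
}

func main() {
	http.HandleFunc("/ping", pingHandler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```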
========================================
[SOURCE: https://en.wikipedia.org/wiki/Foursquare_City_Guide] | [TOKENS: 2504]
Contents Foursquare City Guide Foursquare City Guide, commonly known as Foursquare, is a local search-and-discovery mobile app developed by Foursquare Labs Inc. The app provides personalized recommendations of places near a user's current location based on users' previous browsing history and check-in history. The service was created in late 2008 by Dennis Crowley and Naveen Selvadurai and launched in 2009. Crowley had previously founded the similar project Dodgeball as his graduate thesis project in the Interactive Telecommunications Program (ITP) at New York University. Google bought Dodgeball in 2005 and shut it down in 2009, replacing it with Google Latitude. Dodgeball user interactions were based on SMS technology, rather than an application. Foursquare was similar but offered more features, allowing mobile device users to interact with their environment. Foursquare took advantage of new smartphones like the iPhone, which had built-in GPS to better detect a user's location. Until late July 2014, Foursquare featured a social networking layer that enabled a user to share their location with friends, via "check in"—a user would tell the application when they were at a particular location using a mobile website, text messaging, or a device-specific application by selecting from a list of venues the application locates nearby. In May 2014, the company launched Swarm, a companion app to Foursquare City Guide, that reimagined the social networking and location-sharing aspects of the service as a separate application. On August 7, 2014, the company launched Foursquare 8.0, a new version of the service. This version removed the check-in feature and location sharing, instead focusing on local search. On October 21, 2024, it was announced that the app would sunset on December 15, 2024, with the web version following on April 28, 2025. Foursquare Swarm would remain and gain features that were previously only available in the Foursquare app. Features Major features include local search and recommendations, tips and expertise, tastes, location detection, ratings, lists, superusers, brands, and the Places API. Foursquare uses proprietary technology, Pilgrim, to detect a user's location. When users opt in to always-on location sharing, Pilgrim determines a user's current location by comparing historical check-in data with the user's current GPS signal, cell tower triangulation, cellular signal strength and surrounding Wi-Fi signals. The service provides ten levels of Superuser status, which is awarded to users who apply and pass a test that requires them to meet quality and quantity criteria. Only Superusers have the ability to edit venue information. Superusers can attain different levels as they contribute more high-quality edits over time. In the past, Foursquare has allowed companies to create pages of tips and allowed users to "follow" a company and receive its tips when they check in at certain locations. On July 25, 2012, Foursquare revealed Promoted Updates, an app update expected to create a new revenue stream for the company. The new program allowed companies to issue messages to Foursquare users about deals or available products. Foursquare's underlying technology is used by apps such as Uber and Twitter. Earlier versions of Foursquare supported check-ins and location sharing, but these were moved to the service's sibling app, Foursquare Swarm, in 2014.
In previous versions of Foursquare, if a user had checked into a venue on more days than anyone else in the past 60 days, then they would be crowned "Mayor" of that venue. Someone else could then earn the title by checking in more times than the previous mayor. Businesses could also incentivize mayorships through rewards for users who were the mayor (such as food and drink discounts). As the service grew, it became difficult to compete for mayorships in high-density areas where the service was popular. The mayorship feature was retired from version 8.0 and reimplemented in Swarm. Badges were earned by checking into venues. Some badges were tied to venue "tags" and the badge earned depended on the tags applied to the venue. Other badges were specific to a city, venue, event, or date. In September 2010, badges began to be awarded for completing tasks as well as checking in. In version 8.0, badges were retired, which upset some existing users. Earlier versions of the app also used a "points" system with users receiving a numerical score for each check-in, with over 100 bonuses to gain additional points, such as being first among friends to check into a place or becoming the venue's mayor. The use of gamification and game-design principles were integral features. In version 8.0 points and leaderboards were retired, but were reimplemented in the Swarm app. "Specials" were another feature of the app that acted as an incentive for Foursquare users to check in at new spots or revisit their favorite hangouts. Over 750,000 businesses offered "Specials" that included discounts and freebies. They were intended for businesses to persuade new and regular customers to visit their venues. "Specials" included anything from a free beer for the first check-in to 10% off at a restaurant. Swarm In May 2014, the company launched Swarm, a companion app to Foursquare, that migrated the social networking and location-sharing aspects of the service into a separate application. Swarm acts as a lifelogging tool for the user to keep a record of the places they have been, featuring statistics on the places they have been, and a search capability to recall places they have visited. Swarm also lets the user share where they have been with their friends, and see where their friends have been. Check-ins are rewarded with points, in the form of virtual coins, and friends can challenge each other in a weekly leaderboard. Checking in to different categories of venue also unlocks virtual stickers. Though it is not necessary to use both apps, Swarm works together with Foursquare to improve a user's recommendations: a user's Swarm check-ins help Foursquare understand the kinds of places they like to go. Availability Foursquare is available for Android and iOS devices. Versions of Foursquare were previously available for Symbian OS, Series 40, MeeGo, WebOS, Maemo, Windows Phone, Bada, BlackBerry OS, PlayStation Vita, and Windows 8.[non-primary source needed] Users may also use their mobile browsers to access Foursquare mobile, but feature phone users must search for venues manually instead of using GPS that most smartphone applications can use. History Foursquare started out in 2009 in 100 worldwide metro areas. In January 2010, Foursquare changed their location model to allow check-ins from any location worldwide. In September 2010, Foursquare announced version 2.0 of its check-in app which aimed to direct users to new locations and activities, rather than just sharing their location. 
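Because the retired mayorship rule described above is essentially a small ranking computation, a brief illustration may be useful. The following is a minimal, hypothetical Python sketch of how a mayor could be determined from a check-in log under that rule; the data layout, function name, and tie-breaking choice are assumptions for illustration rather than Foursquare's actual implementation.

# Hypothetical sketch of the retired "mayor" rule: the user with check-ins on the most
# distinct days at a venue over the trailing 60 days holds the title.
from collections import defaultdict
from datetime import date, timedelta

def current_mayor(checkins, venue, today, window_days=60):
    """checkins: iterable of (user, venue, day) tuples. Returns the mayor or None."""
    cutoff = today - timedelta(days=window_days)
    days_by_user = defaultdict(set)
    for user, v, day in checkins:
        if v == venue and cutoff <= day <= today:
            days_by_user[user].add(day)          # distinct days, not raw check-in count
    if not days_by_user:
        return None
    # Most distinct check-in days wins; ties fall to the lexicographically larger
    # user id here, which is an arbitrary illustrative choice.
    return max(days_by_user, key=lambda u: (len(days_by_user[u]), u))

if __name__ == "__main__":
    today = date(2013, 6, 1)
    log = [
        ("alice", "coffee_bar", today - timedelta(days=1)),
        ("alice", "coffee_bar", today - timedelta(days=3)),
        ("bob",   "coffee_bar", today - timedelta(days=2)),
        ("bob",   "coffee_bar", today - timedelta(days=70)),  # outside the 60-day window
    ]
    print(current_mayor(log, "coffee_bar", today))  # -> alice

The points and badge mechanics mentioned above would layer on top of the same kind of per-check-in bookkeeping, awarding a score or unlocking a badge as each check-in is recorded.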
Foursquare also created a button that added any location in the app to a user's to-do list, and the app would then remind the user when there were to-do items nearby. On February 21, 2011, Foursquare reached 7 million user IDs. The company was expected to pass 750 million check-ins before the end of June 2011, with an average of about 3 million check-ins per day. On August 8, 2011, President Barack Obama joined Foursquare, with the intention that the staff at the White House would use the service to post tips from places the president had visited. In 2011, user demographics showed a roughly equal split between male and female user accounts, with 50 percent of users registered outside of the US. The most recent statistics show Foursquare with approximately 55 million monthly active users. On June 7, 2012, Foursquare launched a major redesign, which it described as a "whole new app". The app's "explore" function now allowed users to browse locations by category or conduct specific searches like "free wi-fi" or "dumplings". Foursquare incorporated features from social discovery and local search applications, as well as the "like" feature made famous by Facebook. In May 2014, Foursquare launched Swarm, a companion app to Foursquare City Guide, which moved the social networking and location-sharing aspects of the service to a separate application. On August 7, 2014, the company launched Foursquare 8.0, a new version of the service which removed location-sharing and check-in features, pivoting to local search instead. Foursquare Day Foursquare Day was coined by Nate Bonilla-Warford, an optometrist from Tampa, Florida, on March 12, 2010. The idea came to him while "thinking about new ways to promote his business". In 2010, McDonald's launched a spring pilot program that took advantage of Foursquare Day. Foursquare users who checked into McDonald's restaurants on Foursquare Day were given the chance to win gift cards in $5 and $10 increments. Mashable reported that there was a "33% increase in foot traffic" to McDonald's venues, as apparent in the increase in Foursquare check-ins. Privacy In February 2010, a site known as Please Rob Me was launched; it scraped data from public Twitter messages that had been pushed through Foursquare to list people who were not at home. The purpose of the site was to raise awareness about the potential thoughtlessness of location sharing. In March 2010, a privacy issue was observed for users who connected their Twitter account to Foursquare. If the user was joined at a location by one of their Foursquare contacts who was also using Twitter, that user could allow Foursquare to post a message such as "I am at Starbucks – Santa Clara (link to map) w/@mediaphyter" to their own Twitter feed. Similarly, if a user had agreed to Foursquare location sharing, that user's Foursquare contacts would be able to share their location publicly on Twitter. Later in 2010, white hat hacker Jesper Andersen discovered a vulnerability on Foursquare that raised privacy concerns. Foursquare's location pages displayed a grid of 50 photos chosen at random, regardless of the users' privacy settings. Whenever a user "checked in" at a location, their picture appeared on that location page, even if they only wanted their friends to know where they were. Andersen then crafted a script that collected check-in information. It is estimated that Andersen collected around 875,000 check-ins.
Andersen contacted Foursquare about the vulnerability, and Foursquare responded by fixing its privacy settings. In 2011, in response to privacy issues regarding social networking sites, Foursquare co-founder Naveen Selvadurai stated that "Users decide if they want to push to Twitter or Facebook, over what information they want to share and send" and "There is a lot of misunderstanding about location-based services. On Foursquare, if you don't want people to know you are on a date or with a friend at a certain place, then you don't have to let people know. You don't check in." Selvadurai also stated that Foursquare does not passively track users, which means a user has to actively check in to let people know where they are. On May 8, 2012, Foursquare changed its API in response to a number of "stalker" applications which had been making the locations of all female users within a specific area available to the public. In late December 2012, Foursquare updated its privacy policy to indicate it would display users' full names, as opposed to an initial for a surname. In addition, companies could view a more detailed overview of visitors who had checked into their businesses throughout the day. Foursquare has since updated both its privacy policy and cookies policy to detail how location data is used in new features and products.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_ref-7] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated in 1989 to Canada; he has Canadian citizenship since his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and their leadership in the AI boom in the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes, and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025–26 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune, (equivalent to $180,000,000 in 2025) Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, it was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In May 2020, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. 
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials, which have caused the deaths of some monkeys, have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink.[needs update] In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. After a Twitter poll, Musk promised to step down as CEO; five months later, he did so, transitioning to executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain, and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election for Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement in a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 that featured executives from General Motors, Ford, and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and speculated that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that these leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" that is organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain, and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found that since 2023 he had boosted far-right political movements seeking to cut immigration and curtail business regulation in at least 18 countries on six continents.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clearly defined. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his leadership of DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump; its most notable event came on June 5, 2025, when Musk alleged on X (formerly Twitter) that Trump had ties to sex offender Jeffrey Epstein, posting that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars; he has repeatedly pushed for colonizing Mars so that humanity becomes an interplanetary species and lowers the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While he describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he stated has a "'restoring effect' that helps his 'mental calibration'". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that they were focused on producing a game not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child Nevada Musk died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. 
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Elon Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. Also in July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St Clair had filed for sole custody of her five-month-old son and for Musk to be recognised as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from the Wall Street Journal indicated that $1 million of these payments to St. Clair were structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans for coming to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth was derived from Tesla stock, although he described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, unlike other billionaires, who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn criticism for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-IJA-2014October_28-0] | [TOKENS: 11349]
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets (by meteoroids, for example) in a process called panspermia. During most of their stellar evolution, stars fuse hydrogen nuclei into helium nuclei; the helium produced is slightly less massive than the hydrogen consumed, and the difference is released as energy. The process continues until the star uses up all of its available fuel, with the speed of consumption related to the size of the star. In their last stages, stars start fusing helium nuclei to form carbon nuclei. Larger stars can go on to build heavier elements such as oxygen, neon, silicon, and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that eventually form new generations of stars and planets. Many of those materials are the raw components of life on Earth. Because this process takes place throughout the universe, such materials are ubiquitous in the cosmos rather than peculiar to the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way galaxy, which in turn is part of the Local Group, a galaxy group that belongs to the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause long delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would take roughly 100,000 years to arrive. With current technology, such systems can only be studied with telescopes, which have limitations. Dark matter is estimated to account for more combined mass than stars and gas clouds, but as it plays no role in the evolution of stars and planets, astrobiology usually does not take it into account. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have liquid water. Venus is located in the Solar System's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
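The 100,000-year figure quoted above is simple arithmetic. Below is a minimal sketch in Python, assuming only the rounded values given in the text (a speed of 50,000 km/h and a distance of 4.4 light-years); the kilometres-per-light-year constant is a standard conversion factor, not from the source.

    # Rough check of the Voyager 2 / Alpha Centauri travel time quoted above.
    # Inputs are the rounded figures from the text, not precise mission data.
    KM_PER_LIGHT_YEAR = 9.461e12      # kilometres in one light-year
    speed_km_per_h = 50_000           # approximate Voyager 2 speed
    distance_ly = 4.4                 # distance to the Alpha Centauri system

    distance_km = distance_ly * KM_PER_LIGHT_YEAR
    years = distance_km / speed_km_per_h / (24 * 365.25)
    print(f"{years:,.0f} years")      # about 95,000 years, on the order of 100,000

The same arithmetic is why, as noted above, extrasolar systems can currently be studied only with telescopes rather than probes.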
The actual distances of habitable zones vary according to the type of star, and even the activity of each specific star influences local habitability. The type of star also determines how long the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is ubiquitous across the planet and has adapted over time to almost all available environments; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements: a celestial body may not have any life on it, even if it is habitable. Likelihood of existence Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
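A common first-order way to quantify how the habitable zone moves with the type of star is to scale the Sun's zone by the square root of the stellar luminosity, since stellar flux falls off with the square of distance. The sketch below assumes this simple flux-scaling rule and illustrative solar boundaries of 0.95 and 1.67 AU; these particular numbers are assumptions for illustration, not values from the text.

    # First-order habitable-zone estimate: scale the Sun's zone by sqrt(luminosity).
    # The 0.95 / 1.67 AU solar boundaries are illustrative assumed values.
    import math

    def habitable_zone_au(luminosity_solar, inner_sun=0.95, outer_sun=1.67):
        """Return (inner, outer) habitable-zone radii in AU for a star
        of the given luminosity in solar units."""
        scale = math.sqrt(luminosity_solar)
        return inner_sun * scale, outer_sun * scale

    # A red dwarf with 2% of the Sun's luminosity has a much closer-in zone:
    print(habitable_zone_au(0.02))    # roughly (0.13, 0.24) AU

Real habitability models add corrections for the stellar spectrum, the planet's atmosphere, and stellar activity, which is one reason the text stresses that being in the zone is not by itself enough.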
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is:: xix N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L, where N is the number of civilizations in the galaxy whose electromagnetic emissions are detectable, R_{*} is the rate of formation of stars suitable for the development of intelligent life, f_{p} is the fraction of those stars with planetary systems, n_{e} is the number of planets per such system with an environment suitable for life, f_{l} is the fraction of suitable planets on which life actually appears, f_{i} is the fraction of life-bearing planets on which intelligent life emerges, f_{c} is the fraction of civilizations that develop a technology releasing detectable signs into space, and L is the length of time such civilizations release detectable signals. Drake's proposed estimates, with the numbers on the right side of the equation agreed to be speculative and open to substitution, were 10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000. The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This makes it impossible to draw firm conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
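Because the Drake equation is just a product of factors, it is easy to evaluate with any chosen set of estimates. Below is a minimal sketch in Python using the illustrative values quoted above, which are speculative placeholders rather than measurements.

    # Evaluate the Drake equation N = R* · fp · ne · fl · fi · fc · L
    # with the illustrative estimates quoted in the text.
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    n_civilizations = drake(
        r_star=5,         # rate of suitable star formation (per year)
        f_p=0.5,          # fraction of those stars with planets
        n_e=2,            # habitable planets per system that has planets
        f_l=1,            # fraction of habitable planets that develop life
        f_i=0.2,          # fraction of those that develop intelligence
        f_c=1,            # fraction that emit detectable signals
        lifetime=10_000,  # years a civilization remains detectable
    )
    print(n_civilizations)    # 10000.0, matching the product quoted in the text

Swapping in different estimates changes the result by orders of magnitude, which is exactly the criticism noted above: the equation organizes the unknowns but cannot settle them.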
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that abiogenesis can start within a gaseous or solid medium: atoms move either too fast or too slow in those media, making it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic and antimony (three bonds), and carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more than the others. In Earth's crust the most abundant of those elements is silicon, in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still rely on RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those on Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even a hypothetical one. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not yet fully known. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Conditions on the other planets of the Solar System, and on worlds in the many galaxies beyond the Milky Way, are very harsh and seem too extreme to harbor any life. These planets can experience intense UV radiation paired with extreme temperatures and a lack of water, conditions that do not seem to favor the emergence or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to harbor life. Fossil evidence and long-standing theories backed by years of research have marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that most life on Earth now inhabits: hydrothermal vents, for instance, are scorching hot because magma escaping from the Earth's mantle meets much colder ocean water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, suggesting that some form of life could be supported even in environments as harsh as those on other planets in the Solar System. What makes these harsh environments suitable for the origin of life on Earth, and potentially for the emergence of life on other planets, is that chemical reactions form spontaneously in them. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and these carbon-fixing, reduced chemical pathways were therefore important for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing chemistry of reduced compounds that occurs around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth, and no intelligent species other than humans is known to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. Venus has a runaway greenhouse effect, the hottest planetary surface in the Solar System, clouds of sulfuric acid, no remaining surface liquid water, and a thick carbon-dioxide atmosphere with enormous pressure. Comparing the two planets helps to pinpoint the differences that lead to beneficial or harmful conditions for life. Despite the conditions against life on Venus, there are suspicions that microbial life-forms may survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. Europa's ocean, by contrast, would be in contact with a rocky seafloor, which helps drive chemical reactions. It may be difficult, though, to dig deep enough to study those oceans. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled at all, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. By studying Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continued existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status had been reported for studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane). Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. The lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory mission, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed in Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar in this way, it would reveal a shade of green, a result of the abundance of photosynthesizing plants. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the form they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection, in the plumes of Saturn's moon Enceladus, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be fully identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures, effects on the native planet that would not be produced by natural processes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. Earth's atmosphere contains nitrogen dioxide as a result of air pollution, which could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may well be generated and used on such worlds too. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause excess infrared radiation that telescopes might notice. Such infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show excess infrared radiation. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets discovered so far range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter.
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone,[c] with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way,[d] that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive known exoplanet was PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, rejecting explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the Sun and all other celestial bodies revolve around Earth. However, these bodies were not considered worlds. In Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
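As a back-of-the-envelope check of the planet counts quoted earlier in this section, the arithmetic can be spelled out explicitly. Only the 200 billion stars and the roughly 1-in-5 occurrence rate come from the text; the assumed share of Sun-like stars is a hypothetical figure chosen so that the product reproduces the quoted 11 billion total.

    # Back-of-the-envelope check of the habitable-planet counts quoted above.
    stars_in_milky_way = 200e9    # from the text
    occurrence_rate = 0.2         # "about 1 in 5" Sun-like stars, from the text
    fraction_sun_like = 0.27      # assumed share of Sun-like stars (not from the text)

    earth_sized_in_hz = stars_in_milky_way * fraction_sun_like * occurrence_rate
    print(f"{earth_sized_in_hz:.1e}")    # ~1.1e10, i.e. roughly 11 billion planets

Including red dwarfs at a comparable occurrence rate pushes the total toward the 40 billion figure quoted above.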
Eventually two groups emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should also have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talked about other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived ancient Greece itself. The Great Library of Alexandria compiled information about these ideas, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese, and its own scholars, and this knowledge spread through the Byzantine Empire, from where it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, and its refinement by Galileo Galilei, dispelled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for advocating the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Isaac Newton's formulation of the theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights but physical objects. The notion that life might exist on them soon became an ongoing topic of discussion, although one with no practical way to investigate it. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions on each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close-up images of Mars, which permanently debunked the idea of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century. The growth of extraterrestrials as a theme in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science moved at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later, more powerful telescopes revealed that all such features were natural. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took higher-resolution photos showing that there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to the popularization of science through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers about the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people at the time failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be considered in scientific terms, as it is known which features are beneficial or harmful for life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the event of extraterrestrial contact. One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possible existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for about 25% of them an extraterrestrial origin can neither be confirmed nor denied.
In 2020, Isaac Ben-Israel, chairman of the Israel Space Agency, stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, such beings were not initially thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be otherwise. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later as CGI became more effective and less expensive. Real-life events sometimes capture people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype used in works of fiction.
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_Israel#Achaemenid_period_(538–332_BCE)] | [TOKENS: 14912]
Contents History of Israel The history of Israel covers the Southern Levant region, also known as Canaan, Palestine, or the Holy Land, which is the location of Israel and Palestine. From prehistory, as part of the Levantine corridor, the area witnessed waves of early humans from Africa, then the emergence of the Natufian culture c. 10,000 BCE. The region entered the Bronze Age c. 2,000 BCE with the development of Canaanite civilization. In the Iron Age, the kingdoms of Israel and Judah were established, entities central to the origins of the Abrahamic religions. These gave rise to Judaism, Samaritanism, Christianity, Islam, Druzism, and Baha'ism. The Land of Israel has seen many conflicts, been controlled by various polities, and hosted various ethnic groups. In the following centuries, the Assyrian, Babylonian, Achaemenid, and Macedonian empires conquered the region. Ptolemies and Seleucids vied for control during the Hellenistic period. Through the Hasmonean dynasty, the Jews maintained independence for a century before incorporation into the Roman Republic. As a result of the Jewish–Roman wars in the 1st and 2nd centuries CE, many Jews were killed or sold into slavery. Following the advent of Christianity, demographics shifted towards newfound Christians, who replaced Jews as the majority by the 4th century. In the 7th century, Byzantine Christian rule over Israel was superseded in the Muslim conquest of the Levant by the Rashidun Caliphate, to later be ruled by the Umayyad, Abbasid, and Fatimid caliphates, before being conquered by the Seljuks in the 1070s. Throughout the 12th and 13th centuries, the Land of Israel saw wars between Christians and Muslims as part of the Crusades, with the Kingdom of Jerusalem overrun by Saladin's Ayyubids in the 12th century. The Crusaders held on to shrinking territories for another century. In the 13th century, the Land of Israel became subject to Mongol conquest, though this was stopped by the Mamluk Sultanate, under whose rule it remained until the 16th century. The Mamluks were defeated by the Ottoman Empire, and the region became an Ottoman province until the early 20th century. The 19th century saw the rise of a Jewish nationalist movement in Europe known as Zionism; aliyah, Jewish immigration to Israel from the diaspora, increased. During World War I, the Sinai and Palestine campaign of the Allies led to the partition of the Ottoman Empire. Britain was granted control of the region by a League of Nations mandate, known as Mandatory Palestine. The British committed to the creation of a Jewish homeland in the 1917 Balfour Declaration. Palestinian Arabs sought to prevent Jewish immigration, and tensions grew during British administration. In 1947, the UN voted for the partition of Mandate Palestine and the creation of Jewish and Arab states. The Jews accepted the plan, while the Arabs rejected it. A civil war ensued, won by the Jews. In May 1948, the Israeli Declaration of Independence sparked the 1948 War in which Israel repelled the armies of the neighbouring states. It resulted in the 1948 Palestinian expulsion and flight and led to Jewish emigration from other parts of the Middle East. About 40% of the global Jewish population resides in Israel. In 1979, the Egypt–Israel peace treaty was signed. In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian Authority. In 1994, the Israel–Jordan peace treaty was signed. 
Despite a long-running Israeli–Palestinian peace process, the conflict continues. Prehistory The oldest evidence of early humans in the territory of modern Israel, dating to 1.5 million years ago, was found in Ubeidiya near the Sea of Galilee. Flint tool artefacts have been discovered at Yiron, the oldest stone tools found anywhere outside Africa. The Daughters of Jacob Bridge over the Jordan River provides evidence of the control of fire by early humans around 780,000 years ago, one of the oldest known examples. In the Mount Carmel area, at el-Tabun and Es Skhul, Neanderthal and early modern human remains were found, showing the longest stratigraphic record in the region, spanning 600,000 years of human activity, from the Lower Paleolithic to the present day, representing roughly a million years of human evolution. Other significant Paleolithic sites include Qesem cave. A 200,000-year-old fossil from Misliya Cave is the second-oldest evidence of anatomically modern humans found outside Africa. Other notable finds include the Skhul and Qafzeh hominins, as well as Manot 1. Around the 10th millennium BCE, the Natufian culture existed in the area. The beginning of agriculture in the region during the Neolithic Revolution is evidenced by sites such as Nahal Oren and Gesher. Bronze Age Canaan The Canaanites are archaeologically attested in the Middle Bronze Age (2100–1550 BCE). There were probably independent or semi-independent city-states. Cities were often surrounded by massive earthworks, resulting in the archaeological mounds, or 'tells', common in the region today. In the late Middle Bronze Age, the Nile Delta in Egypt was settled by Canaanites who maintained close connections with Canaan. During that period, the Hyksos, dynasties of Canaanite/Asiatic origin, ruled much of Lower Egypt before being overthrown in the 16th century BCE. During the Late Bronze Age (1550–1200 BCE), there were Canaanite vassal states paying tribute to the New Kingdom of Egypt, which governed from Gaza. In 1457 BCE, Egyptian forces under the command of Pharaoh Thutmose III defeated a rebellious coalition of Canaanite vassal states led by Kadesh's king at the Battle of Megiddo. In the Late Bronze Age there was a period of civilizational collapse in the Middle East; Canaan fell into chaos, and Egyptian control ended. There is evidence that urban centers such as Hazor, Beit She'an, Megiddo, Ekron, Isdud and Ascalon were damaged or destroyed. Two groups appear at this time, and are associated with the transition to the Iron Age (they used iron weapons and tools, which were superior to earlier bronze ones): the Sea Peoples, particularly the Philistines, who migrated from the Aegean world and settled on the southern coast, and the Israelites, whose settlements dotted the highlands. Some 2nd millennium inscriptions about the semi-nomadic Habiru people are believed to be connected to the Hebrews, a name generally synonymous with the Biblical Israelites. Many scholars regard this connection as plausible since the two ethnonyms have similar etymologies, although others argue that Habiru refers to a social class found in every Near Eastern society, including Hebrew societies. Ancient Israel and Judah: Iron Age to Babylonian period The earliest recorded evidence of a people by the name of Israel (as ysrỉꜣr) occurs in the Egyptian Merneptah Stele, erected for Pharaoh Merneptah c. 1209 BCE. 
Archeological evidence indicates that during the early Iron Age I, hundreds of small villages were established on the highlands of Canaan on both sides of the Jordan River, primarily in Samaria, north of Jerusalem. These villages had populations of up to 400, were largely self-sufficient, and lived from herding, grain cultivation, and growing vines and olives, with some economic interchange. The pottery was plain and undecorated. Writing was known and available for recording, even in small sites. William G. Dever sees this "Israel" in the central highlands as a cultural and probably political entity, more an ethnic group than an organized state. Modern scholars believe that the Israelites and their culture branched out of the Canaanite peoples and their cultures through the development of a distinct monolatristic—and later monotheistic—religion centred on the national god Yahweh. According to McNutt, "It is probably safe to assume that sometime during Iron Age I a population began to identify itself as 'Israelite'", differentiating itself from the Canaanites through such markers as the prohibition of intermarriage, an emphasis on family history and genealogy, and religion. Philistine cooking tools, the prevalence of pork in their diet, and locally made Mycenaean pottery—which later evolved into bichrome Philistine pottery—all support the Philistines' foreign origin. Their cities were large and elaborate, which—together with these findings—points to a complex, hierarchical society. Israel Finkelstein believes that the oldest Abraham traditions, which focus on the themes of land and offspring and possibly on his altars in Hebron, originated in the Iron Age. Abraham's Mesopotamian heritage is not discussed. In the 10th century BCE, the Israelite kingdoms of Judah and Israel emerged. The Hebrew Bible states that these were preceded by a single kingdom ruled by Saul, David and Solomon, who is said to have built the First Temple. Archaeologists have debated whether the united monarchy ever existed, with those who accept its existence further divided between maximalists, who support the Biblical accounts, and minimalists, who argue that any such polity was likely smaller than suggested. Historians and archaeologists agree that the northern Kingdom of Israel existed by ca. 900 BCE and the Kingdom of Judah existed by ca. 850 BCE. The Kingdom of Israel was the more prosperous of the two kingdoms and soon developed into a regional power; during the days of the Omride dynasty, it controlled Samaria, Galilee, the upper Jordan Valley, the Sharon and large parts of the Transjordan. Samaria, the capital, was home to one of the largest Iron Age structures in the Levant. The Kingdom of Israel's capital moved between Shechem, Penuel and Tirzah before Omri settled it in Samaria, and the royal succession was often settled by a military coup d'état. The Kingdom of Judah was smaller but more stable; the Davidic dynasty ruled the kingdom for the four centuries of its existence, with the capital always in Jerusalem, controlling the Judaean Mountains, most of the Shephelah and the Beersheba valley in the northern Negev. In 854 BCE, according to the Kurkh Monoliths, an alliance between Ahab of Israel and Ben Hadad II of Aram-Damascus managed to repulse the incursions of the Assyrians, with a victory at the Battle of Qarqar. 
Another important discovery of the period is the Mesha Stele, a Moabite stele found in Dhiban when Emir Sattam Al-Fayez led Henry Tristram to it as they toured the lands of the vassals of the Bani Sakher. The stele is now in the Louvre. In the stele, Mesha, king of Moab, tells how Chemosh, the god of Moab, had been angry with his people and had allowed them to be subjugated to the Kingdom of Israel, but at length, Chemosh returned and assisted Mesha in throwing off the yoke of Israel and restoring the lands of Moab. It refers to Omri, king of Israel, to the god Yahweh, and may contain another early reference to the House of David. The Kingdom of Israel fell to the Assyrians following a long siege of the capital Samaria around 720 BCE. The records of Sargon II indicate that he captured Samaria and deported 27,290 inhabitants to Mesopotamia. It is likely that Shalmaneser captured the city since both the Babylonian Chronicles and the Hebrew Bible viewed the fall of Israel as the signature event of his reign. The Assyrian deportations became the basis for the Jewish idea of the Ten Lost Tribes. Foreign groups were settled by the Assyrians in the territories of the fallen kingdom. The Samaritans claim to be descended from Israelites of ancient Samaria who were not expelled by the Assyrians. It is believed that refugees from the destruction of Israel moved to Judah, massively expanding Jerusalem and leading to construction of the Siloam Tunnel during the rule of King Hezekiah (ruled 715–686 BCE). The Siloam inscription, a plaque written in Hebrew left by the construction team, was discovered in the tunnel in the 1880s, and is today held by the Istanbul Archaeology Museum. During Hezekiah's rule, Sennacherib, the son of Sargon, attempted but failed to capture Judah. Assyrian records say that Sennacherib levelled 46 walled cities and besieged Jerusalem, leaving after receiving extensive tribute. Sennacherib erected the Lachish reliefs in Nineveh to commemorate a second victory at Lachish. The writings of four different "prophets" are believed to date from this period: Hosea and Amos in Israel and Micah and Isaiah of Judah. These men were mostly social critics who warned of the Assyrian threat and acted as religious spokesmen. They exercised some form of free speech and may have played a significant social and political role in Israel and Judah. They urged rulers and the general populace to adhere to god-conscious ethical ideals, seeing the Assyrian invasions as a divine punishment of the collective resulting from ethical failures. Under King Josiah (ruled from 641 to 609 BCE), the Book of Deuteronomy was either rediscovered or written. The Book of Joshua and the accounts of the kingship of David and Solomon in the Book of Kings are believed to have the same author. These books, known as the Deuteronomistic history, are considered a key step in the emergence of monotheism in Judah. They emerged at a time when Assyria was weakened by the rise of Babylon, and may represent the committing to text of earlier oral traditions. During the late 7th century BCE, Judah became a vassal state of the Neo-Babylonian Empire. In 601 BCE, Jehoiakim of Judah allied with Babylon's principal rival, Egypt, despite the strong remonstrances of the prophet Jeremiah. As a punishment, the Babylonians besieged Jerusalem in 597 BCE, and the city surrendered. The defeat was recorded by the Babylonians. 
Nebuchadnezzar pillaged Jerusalem and deported King Jehoiachin (Jeconiah), along with other prominent citizens, to Babylon; Zedekiah, his uncle, was installed as king. A few years later, Zedekiah launched another revolt against Babylon, and an army was sent to conquer Jerusalem. In 587 or 586 BCE, King Nebuchadnezzar II of Babylon conquered Jerusalem, destroyed the First Temple and razed the city. The Kingdom of Judah was abolished, and many of its citizens were exiled to Babylon. The former territory of Judah became a Babylonian province called Yehud, with its center in Mizpah, north of the destroyed Jerusalem. Tablets that describe King Jehoiachin's rations were found in the ruins of Babylon. He was eventually released by the Babylonians. According to both the Bible and the Talmud, the Davidic dynasty continued as head of Babylonian Jewry, called the "Rosh Galut" (exilarch or head of exile). Arab and Jewish sources show that the Rosh Galut continued to exist for another 1,500 years in what is now Iraq, ending in the eleventh century. Second Temple period In 538 BCE, Cyrus the Great of the Achaemenid Empire conquered Babylon and took over its empire. Cyrus issued a proclamation granting religious freedom to all peoples subjugated by the Babylonians (see the Cyrus Cylinder). According to the Bible, Jewish exiles in Babylon, including 50,000 Judeans led by Zerubbabel, returned to Judah to rebuild the Temple in Jerusalem. The Second Temple was subsequently completed c. 515 BCE. A second group of 5,000, led by Ezra and Nehemiah, returned to Judah in 456 BCE. The first was empowered by the Persian king to enforce religious rules; the second had the status of governor and a royal mission to restore the walls of the city. The country remained a province of the Achaemenid empire called Yehud until 332 BCE. The final text of the Torah is thought to have been written during the Persian period (probably 450–350 BCE). The text was formed by editing and unifying earlier texts. The returning Israelites adopted an Aramaic script (also known as the Ashuri alphabet), which they brought back from Babylon; this is the current Hebrew script. The Hebrew calendar closely resembles the Babylonian calendar and probably dates from this period. The Bible describes tension between the returnees, the elite of the First Temple period, and those who had remained in Judah. It is possible that the returnees, supported by the Persian monarchy, became large landholders at the expense of the people who had remained to work the land in Judah, whose opposition to the Second Temple would have reflected a fear that exclusion from the cult would deprive them of land rights. Judah had become in practice a theocracy, ruled by hereditary High Priests and a Persian-appointed governor, frequently Jewish, charged with keeping order and seeing that tribute was paid. A Judean military garrison was placed by the Persians on Elephantine Island near Aswan in Egypt. In the early 20th century, 175 papyrus documents recording activity in this community were discovered, including the "Passover Papyrus", a letter instructing the garrison on how to correctly conduct the Passover feast. In 332 BCE, Alexander the Great of Macedon conquered the region as part of his campaign against the Achaemenid Empire. After his death in 323 BCE, his generals divided the empire and Judea became a frontier region between the Seleucid Empire and the Ptolemaic Kingdom in Egypt. 
Following a century of Ptolemaic rule, Judea was conquered by the Seleucid Empire in 200 BCE at the Battle of Panium. Hellenistic rulers generally respected Jewish culture and protected Jewish institutions. Judea was ruled by the hereditary office of the High Priest of Israel as a Hellenistic vassal. Nevertheless, the region underwent a process of Hellenization, which heightened tensions between Greeks, Hellenized Jews, and observant Jews. These tensions escalated into clashes involving a power struggle for the position of high priest and the character of the holy city of Jerusalem. When Antiochus IV Epiphanes desecrated the Temple, forbade Jewish practices, and forcibly imposed Hellenistic norms on the Jews, several centuries of religious tolerance under Hellenistic control came to an end. In 167 BCE, the Maccabean revolt erupted after Mattathias, a Jewish priest of the Hasmonean lineage, killed a Hellenized Jew and a Seleucid official who participated in sacrifice to the Greek gods in Modi'in. His son Judas Maccabeus defeated the Seleucids in several battles, and in 164 BCE, he captured Jerusalem and restored temple worship, an event commemorated by the Jewish festival of Hanukkah. After Judas' death, his brothers Jonathan Apphus and Simon Thassi were able to establish and consolidate a vassal Hasmonean state in Judea, capitalizing on the Seleucid Empire's decline as a result of internal instability and wars with the Parthians, and by forging ties with the rising Roman Republic. Hasmonean leader John Hyrcanus was able to gain independence, doubling Judea's territories. He took control of Idumaea, where he converted the Edomites to Judaism, and invaded Scythopolis and Samaria, where he demolished the Samaritan Temple. Hyrcanus was also the first Hasmonean leader to mint coins. Under his sons, kings Aristobulus I and Alexander Jannaeus, Hasmonean Judea became a kingdom, and its territories continued to expand, now also covering the coastal plain, Galilee and parts of the Transjordan. Some scholars argue that the Hasmonean dynasty also institutionalized the final Jewish biblical canon. Under Hasmonean rule, the Pharisees, Sadducees and the mystic Essenes emerged as the principal Jewish social movements. The Pharisee sage Simeon ben Shetach is credited with establishing the first schools based around meeting houses. This was a key step in the emergence of Rabbinical Judaism. After Jannaeus' widow, queen Salome Alexandra, died in 67 BCE, her sons Hyrcanus II and Aristobulus II engaged in a civil war over succession. Both parties requested Pompey's assistance, which paved the way for a Roman takeover of the kingdom. In 63 BCE, the Roman Republic conquered Judaea, ending Jewish independence under the Hasmoneans. Roman general Pompey intervened in a dynastic civil war and, after capturing Jerusalem, reinstated Hyrcanus II as high priest but denied him the title of king. Rome soon installed the Herodian dynasty—of Idumean descent but Jewish by conversion—as a loyal replacement for the nationalist Hasmoneans. In 37 BCE, Herod the Great, the first client king of this line, took power after defeating the restored Hasmonean king Antigonus II Mattathias. Herod imposed heavy taxes, suppressed opposition, and centralized authority, which fostered widespread resentment. 
Herod also carried out major monumental construction projects throughout his kingdom, and significantly expanded the Second Temple, which he transformed into one of the largest religious structures in the ancient world. After his death in 4 BCE, his kingdom was divided among his sons into a tetrarchy under continued Roman oversight. In 6 CE, Roman emperor Augustus transformed Judaea into a Roman province, deposing its last Jewish ruler, Herod Archelaus, and appointing a Roman governor in his place. That same year, a census triggered a small uprising by Judas of Galilee, the founder of a movement that rejected foreign authority and recognized only God as king. Over the next six decades, with the brief exception of a short period of Jewish autonomy under the client king Herod Agrippa I, the province remained under direct Roman administration. Some governors ruled with brutality and showed little regard for Jewish religious sensitivities, deepening resentment among the local population. This discontent was also fueled by poor governance, corruption, and growing economic inequality, along with rising tensions between Jews and neighboring populations over ethnic, religious, and territorial disputes. At the same time, collective memory of the Maccabean revolt and the period of Hasmonean independence continued to inspire hopes for national liberation from Roman control. In 64 CE, the Temple High Priest Joshua ben Gamla introduced a religious requirement for Jewish boys to learn to read from the age of six. Over the next few hundred years this requirement became steadily more ingrained in Jewish tradition. The Jewish–Roman wars were a series of large-scale revolts by Jewish subjects against the Roman Empire between 66 and 135 CE. The term primarily applies to the First Jewish–Roman War (66–73 CE) and the Bar Kokhba revolt (132–136 CE), both nationalist rebellions aimed at restoring Jewish independence in Judea. Some sources also include the Diaspora Revolt (115–117 CE), an ethno-religious conflict fought across the Eastern Mediterranean and including the Kitos War in Judaea. The Jewish–Roman wars had a devastating impact on the Jewish people, transforming them from a major population in the Eastern Mediterranean into a dispersed and persecuted minority. The First Jewish-Roman War culminated in the destruction of Jerusalem and other towns and villages in Judaea, resulting in significant loss of life and a considerable segment of the population being uprooted or displaced. Those who remained were stripped of any form of political autonomy. Subsequently, the brutal suppression of the Bar Kokhba revolt resulted in even more severe consequences. Judea witnessed a significant depopulation, as many Jews were killed, expelled, or sold into slavery. The outcome of the conflict marked the termination of efforts to reestablish a Jewish state until the modern era. Jews were banned from residing in the vicinity of Jerusalem, which the Romans rebuilt into the pagan colony of Aelia Capitolina, and the province of Judaea was renamed Syria Palaestina. Collectively, these events enhanced the role of Jewish diaspora, relocating the Jewish demographic and cultural center to Galilee and eventually to Babylonia, with smaller communities across the Mediterranean, the Middle East, and beyond. The Jewish–Roman wars also had a major impact on Judaism, after the central worship site of Second Temple Judaism, the Second Temple in Jerusalem, was destroyed by Titus's troops in 70 CE. 
The destruction of the Temple led to a transformation in Jewish religious practices, emphasizing prayer, Torah study, and communal gatherings in synagogues. This pivotal shift laid the foundation for the emergence of Rabbinic Judaism, which has been the dominant form of Judaism since late antiquity, after the codification of the Babylonian Talmud. Late Roman and Byzantine periods As a result of the disastrous effects of the Bar Kokhba revolt, Jewish presence in the region significantly dwindled. Over the next centuries, more Jews left for communities in the Diaspora, especially the large, rapidly growing Jewish communities in Babylonia and Arabia. Others remained in the Land of Israel, where the spiritual and demographic center shifted from the depopulated Judea to Galilee. Jewish presence also continued in the southern Hebron Hills, in Ein Gedi, and on the coastal plain. The Mishnah and the Jerusalem Talmud, huge compendiums of Rabbinical discussions, were compiled during the 2nd to 4th centuries CE in Tiberias and Jerusalem. Following the revolt, Judea's countryside was settled by pagan populations, including migrants from the nearby provinces of Syria, Phoenicia, and Arabia, whereas Aelia Capitolina, its immediate vicinity, and administrative centers were now inhabited by Roman veterans and settlers from the western parts of the empire. The Romans permitted a hereditary Rabbinical Patriarch from the House of Hillel, called the "Nasi", to represent the Jews in dealings with the Romans. One prominent figure was Judah ha-Nasi, credited with compiling the final version of the Mishnah, a vast collection of Jewish oral traditions. He also emphasized the importance of education in Judaism, leading to requirements that illiterate Jews be treated as outcasts. This might have contributed to some illiterate Jews converting to Christianity. Jewish seminaries, such as those at Shefaram and Bet Shearim, continued to produce scholars. The best of these became members of the Sanhedrin, which was located first at Sepphoris and later at Tiberias. In the Galilee, many synagogues have been found dating from this period, and the burial site of the Sanhedrin leaders was discovered in Beit She'arim. In the 3rd century, the Roman Empire faced an economic crisis and imposed heavy taxation to fund wars of imperial succession. This situation prompted additional Jewish migration from Syria Palaestina to the Sasanian Empire, known for its more tolerant environment; there, a flourishing Jewish community with important Talmudic academies thrived in Babylonia, engaging in a notable rivalry with the Talmudic academies of Palaestina. Early in the 4th century, the Emperor Constantine made Constantinople the capital of the Eastern Roman Empire and made Christianity an accepted religion. His mother Helena made a pilgrimage to Jerusalem (326–328) and led the construction of the Church of the Nativity (birthplace of Jesus in Bethlehem), the Church of the Holy Sepulchre (burial site of Jesus in Jerusalem) and other key churches that still exist. The name Jerusalem was restored to Aelia Capitolina, which became a Christian city. Jews were still banned from living in Jerusalem, but were allowed to visit and worship at the site of the ruined temple. Over the course of the next century Christians worked to eradicate "paganism", leading to the destruction of classical Roman traditions and the eradication of their temples. In 351–352, another Jewish revolt in the Galilee erupted against a corrupt Roman governor. 
The Roman Empire was permanently divided in 395 CE and the region became part of the Eastern Roman Empire, known as the Byzantine Empire. Under Byzantine rule, much of the region and its non-Jewish population were won over by Christianity, which eventually became the dominant religion in the region. The presence of holy sites drew Christian pilgrims, some of whom chose to settle, contributing to the rise of a Christian majority. Christian authorities encouraged this pilgrimage movement and appropriated lands, constructing magnificent churches at locations linked to biblical narratives. Additionally, monks established monasteries near pagan settlements, encouraging the conversion of local pagans. During the Byzantine period, the Jewish presence in the region declined, and it is believed that Jews lost their majority status in Palestine in the fourth century. While Judaism remained the sole non-Christian religion tolerated, restrictions on Jews gradually increased, with Jews prohibited from constructing new places of worship, holding public office, or owning Christian slaves. In 425, after the death of the last Nasi, Gamliel VI, the Nasi office and the Sanhedrin were officially abolished, and the standing of yeshivot weakened. The leadership void was gradually filled by the Jewish center in Babylonia, which would assume a leading role in the Jewish world for generations after the Byzantine period. During the 5th and 6th centuries CE, the region witnessed a series of Samaritan revolts against Byzantine rule. Their suppression resulted in the decline of Samaritan presence and influence, and further consolidated Christian domination. Though it is acknowledged that some Jews and Samaritans converted to Christianity during the Byzantine period, the reliable historical records are limited, and they pertain to individual conversions rather than entire communities. In 611, Khosrow II, ruler of Sassanid Persia, invaded the Byzantine Empire. He was helped by Jewish fighters recruited by Benjamin of Tiberias and captured Jerusalem in 614. The "True Cross" was captured by the Persians. The Jewish Himyarite Kingdom in Yemen may also have provided support. Nehemiah ben Hushiel was made governor of Jerusalem. Christian historians of the period claimed the Jews massacred Christians in the city, but there is no archeological evidence of destruction, leading modern historians to question their accounts. In 628, Kavad II (son of Khosrow) returned Palestine and the True Cross to the Byzantines and signed a peace treaty with them. Following the Byzantine re-entry, Heraclius massacred the Jewish population of Galilee and Jerusalem, while renewing the ban on Jews entering the latter. Early Muslim period The Levant was conquered by an Arab army under the command of ʿUmar ibn al-Khaṭṭāb in 635, and became the province of Bilad al-Sham of the Rashidun Caliphate. Two military districts—Jund Filastin and Jund al-Urdunn—were established in Palestine. A new city called Ramlah was built as the Muslim capital of Jund Filastin, while Tiberias served as the capital of Jund al-Urdunn. The Byzantine ban on Jews living in Jerusalem came to an end. In 661, Mu'awiya I was crowned Caliph in Jerusalem, becoming the first of the (Damascus-based) Umayyad dynasty. In 691, Umayyad Caliph Abd al-Malik (685–705) constructed the Dome of the Rock shrine on the Temple Mount, where the two Jewish temples had been located. A second building, the Al-Aqsa Mosque, was also erected on the Temple Mount in 705. 
Both buildings were rebuilt in the 10th century following a series of earthquakes. In 750, Arab discrimination against non-Arab Muslims led to the Abbasid Revolution, and the Umayyads were replaced by the Abbasid caliphs, who built a new city, Baghdad, to be their capital. This period is known as the Islamic Golden Age; the Arab Empire was the largest in the world, and Baghdad the largest and richest city. Both Arabs and minorities prospered across the region and much scientific progress was made. There were, however, setbacks: during the 8th century, the Caliph Umar II introduced a law requiring Jews and Christians to wear identifying clothing. Jews were required to wear yellow stars around their necks and on their hats, while Christians had to wear blue. Clothing regulations arose during repressive periods of Arab rule and were designed more to humiliate than to persecute non-Muslims. A poll tax was imposed on all non-Muslims by Islamic rulers and failure to pay could result in imprisonment or worse. In 982, Caliph Al-Aziz Billah of the Cairo-based Fatimid dynasty conquered the region. The Fatimids were followers of Isma'ilism, a branch of Shia Islam, and claimed descent from Fatima, Mohammed's daughter. Around the year 1010, the Church of the Holy Sepulchre (believed to be Jesus' burial site) was destroyed by Fatimid Caliph al-Hakim, who relented ten years later and paid for it to be rebuilt. In 1020 al-Hakim claimed divine status and the newly formed Druze religion gave him the status of a messiah. Although the Arab conquest was relatively peaceful and did not cause widespread destruction, it did alter the country's demographics significantly. Over the ensuing several centuries, the region experienced a drastic decline in its population, from an estimated 1 million during Roman and Byzantine times to some 300,000 by the early Ottoman period. This demographic collapse was accompanied by a slow process of Islamization that resulted from the flight of non-Muslim populations, immigration of Muslims, and local conversion. The majority of the remaining populace belonged to the lowest classes. While the Arab conquerors themselves left the area after the conquest and moved on to other places, the settlement of Arab tribes in the area both before and after the conquest also contributed to the Islamization. As a result, the Muslim population steadily grew and the area became gradually dominated by Muslims on a political and social level. During the early Islamic period, many Christians and Samaritans, belonging to the Byzantine upper class, migrated from the coastal cities to northern Syria and Cyprus, which were still under Byzantine control, while others fled to the central highlands and the Transjordan. As a result, the coastal towns, formerly important economic centers connected with the rest of the Byzantine world, were emptied of most of their residents. Some of these cities—namely Ashkelon, Acre, Arsuf, and Gaza—now fortified border towns, were resettled by Muslim populations, who developed them into significant Muslim centers. The region of Samaria also underwent a process of Islamization as a result of waves of conversion among the Samaritan population and the influx of Muslims into the area. The predominantly Jacobite Monophysitic Christian population had been hostile to Byzantine orthodoxy, and at times for that reason welcomed Muslim rule. There is no strong evidence for forced conversion, or that the jizya tax significantly affected such changes. 
The demographic situation in Palestine was further altered by urban decline under the Abbasids, and it is thought that the 749 earthquake hastened this process by driving more Jews, Christians, and Samaritans to emigrate to diaspora communities, while many of those who remained in the devastated cities and poor villages eventually converted to Islam. Historical records and archeological evidence suggest that many Samaritans converted under Abbasid and Tulunid rule, after suffering through severe difficulties such as droughts, earthquakes, religious persecution, heavy taxes and anarchy. The same region also saw the settlement of Arabs. Over the period, the Samaritan population drastically decreased, with the rural Samaritan population converting to Islam, and small urban communities remaining in Nablus and Caesarea, as well as in Cairo, Damascus, Aleppo and Sarepta. Nevertheless, the Muslim population remained a minority in a predominantly Christian area, and it is likely that this status persisted until the Crusader period. Crusades and Mongols In 1095, Pope Urban II called upon Christians to wage a holy war and recapture Jerusalem from Muslim rule. Responding to this call, Christians launched the First Crusade in the same year, a military campaign aimed at retaking the Holy Land, ultimately resulting in the successful siege and conquest of Jerusalem in 1099. In the same year, the Crusaders conquered Beit She'an and Tiberias, and in the following decade, they captured coastal cities with the support of Italian city-state fleets, establishing these coastal ports as crucial strongholds for Crusader rule in the region. Following the First Crusade, several Crusader states were established in the Levant, with the Kingdom of Jerusalem (Regnum Hierosolymitanum) assuming a preeminent position and enjoying special status among them. The population consisted predominantly of Muslims, Christians, Jews, and Samaritans, while the Crusaders remained a minority and relied on the local population who worked the soil. The region saw the construction of numerous robust castles and fortresses, yet efforts to establish permanent European villages proved unsuccessful. Around 1180, Raynald of Châtillon, ruler of Transjordan, caused increasing conflict with the Ayyubid Sultan Saladin (Salah-al-Din), leading to the defeat of the Crusaders in the 1187 Battle of Hattin (above Tiberias). Saladin was able to peacefully take Jerusalem and conquered most of the former Kingdom of Jerusalem. Saladin's court physician was Maimonides, a refugee from Almohad (Muslim) persecution in Córdoba, Spain, where all non-Muslim religions had been banned. The Christian world's response to the loss of Jerusalem came in the Third Crusade of 1190. After lengthy battles and negotiations, Richard the Lionheart and Saladin concluded the Treaty of Jaffa in 1192 whereby Christians were granted free passage to make pilgrimages to the holy sites, while Jerusalem remained under Muslim rule. In 1229, Jerusalem peacefully reverted to Christian control as part of a treaty between Holy Roman Emperor Frederick II and Ayyubid sultan al-Kamil that ended the Sixth Crusade. In 1244, Jerusalem was sacked by the Khwarezmian Tatars, who decimated the city's Christian population, drove out the Jews and razed the city. The Khwarezmians were driven out by the Ayyubids in 1247. Mamluk period Between 1258 and 1291, the area was the frontier between Mongol invaders (occasional Crusader allies) and the Mamluks of Egypt. 
The conflict impoverished the country and severely reduced the population. In Egypt a caste of warrior slaves, known as the Mamluks, gradually took control of the kingdom. The Mamluks were mostly of Turkish origin, and were bought as children and then trained in warfare. They were highly prized warriors, who gave rulers independence of the native aristocracy. They seized control of the Egyptian kingdom following a failed invasion by the Crusaders (the Seventh Crusade). The first Mamluk Sultan, Qutuz of Egypt, defeated the Mongols in the Battle of Ain Jalut ("Goliath's spring" near Ein Harod), ending the Mongol advances. He was assassinated by one of his generals, Baibars, who went on to eliminate most of the Crusader outposts. The Mamluks ruled Palestine until 1516, regarding it as part of Syria. In Hebron, Jews were banned from worshipping at the Cave of the Patriarchs (the second-holiest site in Judaism); they were allowed to enter only seven steps inside the site, and the ban remained in place until Israel assumed control of the West Bank in the Six-Day War. The Egyptian Mamluk sultan Al-Ashraf Khalil conquered the last outpost of Crusader rule in 1291. The Mamluks, continuing the policy of the Ayyubids, made the strategic decision to destroy the coastal area and to bring desolation to many of its cities, from Tyre in the north to Gaza in the south. Ports were destroyed and various materials were dumped to make them inoperable. The goal was to prevent attacks from the sea, given the fear of the return of the Crusaders. This had a long-term effect on those areas, which remained sparsely populated for centuries. Activity in that period was concentrated further inland. With the 1492 expulsion of Jews from Spain and the 1497 persecution of Jews and Muslims by Manuel I of Portugal, many Jews moved eastward, with some deciding to settle in Mamluk Palestine. As a consequence, the local Jewish community underwent significant rejuvenation. The influx of Sephardic Jews began under Mamluk rule in the 15th century, and continued throughout the 16th century and especially after the Ottoman conquest. As city-dwellers, the majority of Sephardic Jews preferred to settle in urban areas, mainly in Safed but also in Jerusalem, while the Musta'arbi community made up the majority of the rural Jewish population. Ottoman period Under the Mamluks, the area was a province of Bilad al-Sham (Syria). It was conquered by Turkish Sultan Selim I in 1516–17, becoming a part of the province of Ottoman Syria for the next four centuries, first as the Damascus Eyalet and later as the Syria Vilayet (following the Tanzimat reorganization of 1864). With the more favorable conditions that followed the Ottoman conquest, the immigration of Jews fleeing Catholic Europe, which had already begun under Mamluk rule, continued, and soon an influx of exiled Sephardic Jews came to dominate the Jewish community in the area. In 1558, Selim II (1566–1574), successor to Suleiman, whose wife Nurbanu Sultan was Jewish, gave control of Tiberias to Doña Gracia Mendes Nasi, one of the richest women in Europe and an escapee from the Inquisition. She encouraged Jewish refugees to settle in the area and established a Hebrew printing press. Safed became a centre for study of the Kabbalah and other Jewish religious studies, culminating with Joseph Karo's writing of the Shulchan Aruch – published in 1565 in Venice – which became the near-universal standard of Jewish religious law. 
Doña Nasi's nephew, Joseph Nasi, was made governor of Tiberias and he encouraged Jewish settlement from Italy. In 1660, a Druze power struggle led to the destruction of Safed and Tiberias. In the late 18th century a local Arab sheikh, Zahir al-Umar, created a de facto independent Emirate in the Galilee. Ottoman attempts to subdue the Sheikh failed, but after Zahir's death the Ottomans restored their rule in the area. In 1799, Napoleon briefly occupied the country and planned a proclamation inviting Jews to create a state. The proclamation was shelved following his defeat at Acre. In 1831, Muhammad Ali of Egypt, an Ottoman ruler who left the Empire and tried to modernize Egypt, conquered Ottoman Syria and imposed conscription, leading to the Arab revolt. In 1838, there was another Druze revolt. In 1839 Moses Montefiore met with Muhammed Pasha in Egypt and signed an agreement to establish 100–200 Jewish villages in the Damascus Eyalet of Ottoman Syria, but in 1840 the Egyptians withdrew before the deal was implemented, returning the area to Ottoman governorship. In 1844, Jews constituted the largest population group in Jerusalem. By 1896 Jews constituted an absolute majority in Jerusalem, but the overall population in Palestine was 88% Muslim and 9% Christian. Between 1882 and 1903, approximately 35,000 Jews moved to Palestine, known as the First Aliyah. In the Russian Empire, Jews faced growing persecution and legal restrictions. Half the world's Jews lived in the Russian Empire, where they were restricted to living in the Pale of Settlement. Severe pogroms in the early 1880s and legal repression led to 2 million Jews emigrating from the Russian Empire. 1.5 million went to the United States. Popular destinations were also Germany, France, the United Kingdom, the Netherlands, Argentina and Palestine. The Zionist movement began in earnest in 1882 with Leon Pinsker's pamphlet Auto-Emancipation, which argued for the creation of a Jewish national homeland as a means to avoid the violence plaguing Jewish communities in Eastern Europe. At the 1884 Katowice Conference, Russian Jews established the Bilu and Hovevei Zion ("Lovers of Zion") movements with the aim of settling in Palestine. In 1878, Russian Jewish emigrants established the village of Petah Tikva ("The Beginning of Hope"), followed by Rishon LeZion ("First to Zion") in 1882. The existing Ashkenazi communities were concentrated in the Four Holy Cities, extremely poor and relied on donations (halukka) from groups abroad, while the new settlements were small farming communities, but still relied on funding by the French Baron, Edmond James de Rothschild, who sought to establish profitable enterprises. Many early migrants could not find work and left, but despite the problems, more settlements arose and the community grew. After the Ottoman conquest of Yemen in 1881, a large number of Yemenite Jews also emigrated to Palestine, often driven by Messianism. In 1896 Theodor Herzl published Der Judenstaat (The Jewish State), in which he asserted that the solution to growing antisemitism in Europe (the so-called "Jewish Question") was to establish a Jewish state. In 1897, the World Zionist Organization was founded and the First Zionist Congress proclaimed its aim "to establish a home for the Jewish people in Palestine secured under public law." The Congress chose Hatikvah ("The Hope") as its anthem. Between 1904 and 1914, around 40,000 Jews settled in the area now known as Israel (the Second Aliyah). 
In 1908, the World Zionist Organization set up the Palestine Bureau (also known as the "Eretz Israel Office") in Jaffa and began to adopt a systematic Jewish settlement policy. In 1909, residents of Jaffa bought land outside the city walls and built the first entirely Hebrew-speaking town, Ahuzat Bayit (later renamed Tel Aviv). In 1915–1916, Talaat Pasha of the Young Turks forced around a million Armenian Christians from their homes in Eastern Turkey, marching them south through Syria, in what is now known as the Armenian genocide. The number of dead is thought to be around 700,000. Hundreds of thousands were forcibly converted to Islam. A community of survivors settled in Jerusalem, one of whom developed the now iconic Armenian pottery. During World War I, most Jews supported the Germans because they were fighting the Russians, who were regarded as the Jews' main enemy. In Britain, the government sought Jewish support for the war effort for a variety of reasons, including an antisemitic perception of "Jewish power" in the Ottoman Empire's Young Turks movement, which was based in Thessaloniki, the most Jewish city in Europe (40% of the 160,000 population were Jewish). The British also hoped to secure American Jewish support for US intervention on Britain's behalf. There was already sympathy for the aims of Zionism in the British government, including the Prime Minister Lloyd George. Over 14,000 Jews were expelled by the Ottoman military commander from the Jaffa area in 1914–1915, due to suspicions that they were subjects of Russia, an enemy power, or Zionists wishing to detach Palestine from the Ottoman Empire. When the entire population of both Jaffa and Tel Aviv, including Muslims, was subject to an expulsion order in April 1917, the affected Jews could not return until the British conquest, completed in 1918, drove the Turks out of Southern Syria. A year prior, in 1917, the British foreign minister, Arthur Balfour, sent a public letter to the British Lord Rothschild, a leading member of his party and leader of the Jewish community. The letter subsequently became known as the Balfour Declaration. It stated that the British Government "view[ed] with favour the establishment in Palestine of a national home for the Jewish people". The declaration provided the British government with a pretext for claiming and governing the country. New Middle Eastern boundaries were decided by an agreement between British and French bureaucrats. A Jewish Legion composed largely of Zionist volunteers organized by Ze'ev Jabotinsky and Joseph Trumpeldor participated in the British invasion. It also participated in the failed Gallipoli Campaign. The Nili Zionist spy network provided the British with details of Ottoman plans and troop concentrations. The Ottoman Empire chose to ally itself with Germany when the First World War began. Arab leaders dreamed of freeing themselves from Ottoman rule and establishing self-government or forming an independent Arab state. Therefore, Britain contacted Hussein bin Ali of the Kingdom of Hejaz and proposed cooperation. Together they organized the Arab revolt, which Britain supplied with very large quantities of rifles and ammunition. With British artillery supporting Arab infantry, the city of Aqaba on the Red Sea was captured. The Arab army then continued north while Britain attacked the Ottomans from the sea. In 1917–1918, Jerusalem and Damascus were conquered from the Ottomans. Britain then broke off cooperation with the Arab army. 
It turned out that Britain had already entered into the secret Sykes–Picot Agreement, which meant that only Britain and France would be allowed to administer the land conquered from the Ottoman Empire. After the Ottomans were pushed out, Palestine came under martial law. The British, French and Arab Occupied Enemy Territory Administration governed the area from shortly before the armistice with the Ottomans until the promulgation of the Mandate in 1920. Mandatory Palestine The British Mandate (in effect, British rule) of Palestine, including the Balfour Declaration, was confirmed by the League of Nations in 1922 and came into effect in 1923. The territory of Transjordan was also covered by the Mandate but under separate rules that excluded it from the Balfour Declaration. Britain signed a treaty with the United States (which did not join the League of Nations) in which the United States endorsed the terms of the Mandate, which was approved unanimously by both the U.S. Senate and House of Representatives. The Balfour Declaration was published on 2 November 1917, and the Bolsheviks seized control of Russia a week later. This led to civil war in the Russian Empire. Between 1918 and 1921, a series of pogroms led to the death of at least 100,000 Jews (mainly in what is now Ukraine), and the displacement as refugees of a further 600,000. This led to further migration to Palestine. Between 1919 and 1923, some 40,000 Jews arrived in Palestine in what is known as the Third Aliyah. Many of the Jewish immigrants of this period were Socialist Zionists and supported the Bolsheviks. The migrants became known as pioneers (halutzim); experienced or trained in agriculture, they established self-sustaining communes called kibbutzim. Malarial marshes in the Jezreel Valley and Hefer Plain were drained and converted to agricultural use. Land was bought by the Jewish National Fund, a Zionist charity that collected money abroad for that purpose. After the French victory over the Arab Kingdom of Syria ended hopes of Arab independence, there were clashes between Arabs and Jews in Jerusalem during the 1920 Nebi Musa riots and in Jaffa the following year, leading to the establishment of the Haganah underground Jewish militia. A Jewish Agency was created, which issued the entry permits granted by the British and distributed funds donated by Jews abroad. Between 1924 and 1929, over 80,000 Jews arrived in the Fourth Aliyah, fleeing antisemitism and heavy tax burdens imposed on trade in Poland and Hungary, inspired by Zionism and motivated by the closure of United States borders by the Immigration Act of 1924, which severely limited immigration from Eastern and Southern Europe. Pinhas Rutenberg, a former Commissar of St Petersburg in Russia's pre-Bolshevik Kerensky Government, built the first electricity generators in Palestine. In 1925, the Jewish Agency established the Hebrew University in Jerusalem and the Technion (technological university) in Haifa. British authorities introduced the Palestine pound (worth 1000 "mils") in 1927, replacing the Egyptian pound as the unit of currency in the Mandate. From 1928, the democratically elected Va'ad Leumi (Jewish National Council or JNC) became the main administrative institution of the Palestine Jewish community (Yishuv) and included non-Zionist Jews. As the Yishuv grew, the JNC adopted more government-type functions, such as education, health care, and security. With British permission, the Va'ad Leumi raised its own taxes and ran independent services for the Jewish population. 
In 1929, tensions grew over the Kotel (Wailing Wall), one of the holiest sites for modern Judaism, which was then a narrow alleyway where the British banned Jews from using chairs or curtains: many of the worshippers were elderly and needed seats, and they also wanted to separate women from men. The Mufti of Jerusalem said it was Muslim property and deliberately had cattle driven through the alley. He alleged that the Jews were seeking control of the Temple Mount. This provided the spark for the August 1929 Palestine riots. The main victims were the (non-Zionist) ancient Jewish community at Hebron, who were massacred. The riots led to right-wing Zionists establishing their own militia in 1931, the Irgun Tzvai Leumi (National Military Organization, known in Hebrew by its acronym "Etzel"), which was committed to a more aggressive policy towards the Arab population. During the interwar period, the perception grew that there was an irreconcilable tension between the two Mandatory functions: providing for a Jewish homeland in Palestine and preparing the country for self-determination. The British rejected the principle of majority rule or any other measure that would give the Arabs, who formed the majority of the population, control over Palestinian territory. Between 1929 and 1938, 250,000 Jews arrived in Palestine (Fifth Aliyah). In 1933, the Jewish Agency and the Nazis negotiated the Ha'avara Agreement (transfer agreement), under which 50,000 German Jews would be transferred to Palestine. The Jews' possessions were confiscated and in return the Nazis allowed the Ha'avara organization to purchase 14 million pounds' worth of German goods for export to Palestine and use it to compensate the immigrants. Although many Jews wanted to leave Nazi Germany, the Nazis prevented Jews from taking any money and restricted them to two suitcases, so few could pay the British entry tax. The agreement was controversial, and the Labour Zionist leader who negotiated it, Haim Arlosoroff, was assassinated in Tel Aviv in 1933. The assassination was used by the British to create tension between the Zionist left and the Zionist right. Arlosoroff had been the boyfriend of Magda Ritschel some years before she married Joseph Goebbels. There has been speculation that he was assassinated by the Nazis to hide the connection, but there is no evidence for it. Between 1933 and 1936, 174,000 arrived despite the large sums the British demanded for immigration permits: Jews had to prove they had 1,000 pounds for families with capital (equivalent to £85,824 in 2023), 500 pounds if they had a profession and 250 pounds if they were skilled labourers. Jewish immigration and Nazi propaganda contributed to the large-scale 1936–1939 Arab revolt in Palestine, a largely nationalist uprising directed at ending British rule. The head of the Jewish Agency, Ben-Gurion, responded to the Arab Revolt with a policy of "Havlagah"—self-restraint and a refusal to be provoked by Arab attacks in order to prevent polarization. The Etzel group broke off from the Haganah in opposition to this policy. The British responded to the revolt with the Peel Commission (1936–37), a public inquiry that recommended that an exclusively Jewish territory be created in the Galilee and on the western coast (including the population transfer of 225,000 Arabs), with the rest becoming an exclusively Arab area. 
The two main Jewish leaders, Chaim Weizmann and David Ben-Gurion, had convinced the Zionist Congress to give equivocal approval to the Peel recommendations as a basis for further negotiation. The plan was rejected outright by the Palestinian Arab leadership, which renewed the revolt; this caused the British to abandon the plan as unworkable. Testifying before the Peel Commission, Weizmann said "There are in Europe 6,000,000 people ... for whom the world is divided into places where they cannot live and places where they cannot enter." In 1938, the US called an international conference to address the question of the vast numbers of Jews trying to escape Europe. Britain made its attendance contingent on Palestine being kept out of the discussion. No Jewish representatives were invited. The Nazis proposed their own solution: that the Jews of Europe be shipped to Madagascar (the Madagascar Plan). The conference proved fruitless, and the Jews were stuck in Europe. With millions of Jews trying to leave Europe and every country closed to Jewish migration, the British decided to close Palestine. The White Paper of 1939 recommended that an independent Palestine, governed jointly by Arabs and Jews, be established within 10 years. The White Paper agreed to allow 75,000 Jewish immigrants into Palestine over the period 1940–44, after which migration would require Arab approval. Both the Arab and Jewish leadership rejected the White Paper. In March 1940 the British High Commissioner for Palestine issued an edict banning Jews from purchasing land in 95% of Palestine. Jews now resorted to illegal immigration (Aliyah Bet or "Ha'apalah"), often organized by the Mossad Le'aliyah Bet and the Irgun. With no outside help and no countries ready to admit them, very few Jews managed to escape Europe between 1939 and 1945. Those caught by the British were mostly imprisoned in Mauritius. During the Second World War, the Jewish Agency worked to establish a Jewish army that would fight alongside the British forces. Churchill supported the plan, but British military and government opposition led to its rejection. The British demanded that the number of Jewish recruits match the number of Arab recruits. In June 1940, Italy declared war on the British Commonwealth and sided with Germany. Within a month, Italian planes bombed Tel Aviv and Haifa, inflicting multiple casualties. In May 1941, the Palmach was established to defend the Yishuv against the planned Axis invasion through North Africa. The British refusal to provide arms to the Jews, even when Rommel's forces were advancing through Egypt in June 1942 (intent on occupying Palestine), and the 1939 White Paper led to the emergence of a Zionist leadership in Palestine that believed conflict with Britain was inevitable. Despite this, the Jewish Agency called on Palestine's Jewish youth to volunteer for the British Army. Some 30,000 Palestinian Jews and 12,000 Palestinian Arabs enlisted in the British armed forces during the war. In June 1944 the British agreed to create a Jewish Brigade that would fight in Italy. Approximately 1.5 million Jews around the world served in every branch of the Allied armies, mainly in the Soviet and US armies. Some 200,000 Jews died serving in the Soviet army alone. A small group (about 200 activists), dedicated to resisting the British administration in Palestine, broke away from the Etzel (which advocated support for Britain during the war) and formed the "Lehi" (Stern Gang), led by Avraham Stern. 
In 1942, the USSR released the Revisionist Zionist leader Menachem Begin from the Gulag and he went to Palestine, taking command of the Etzel organization with a policy of increased conflict against the British. At about the same time, Yitzhak Shamir escaped from the camp in Eritrea where the British were holding Lehi activists without trial and took command of the Lehi (Stern Gang). Jews in the Middle East were also affected by the war. Most of North Africa came under Nazi control and many Jews there were used as slave labourers. The 1941 pro-Axis coup in Iraq was accompanied by massacres of Jews. The Jewish Agency put together plans for a last stand in the event of Rommel invading Palestine (the Nazis planned to exterminate Palestine's Jews).

Between 1939 and 1945, the Nazis, aided by local forces, led systematic efforts to kill every person of Jewish extraction in Europe (the Holocaust), causing the deaths of approximately 6 million Jews. A quarter of those killed were children. The Polish and German Jewish communities, which had played an important role in defining the pre-1945 Jewish world, mostly ceased to exist. In the United States and Palestine, Jews of European origin became disconnected from their families and roots. As the Holocaust mainly affected Ashkenazi Jews, Sephardi and Mizrahi Jews, who had been a minority, became a much more significant factor in the Jewish world. Those Jews who survived in central Europe were displaced persons (refugees); an Anglo-American Committee of Inquiry, established to examine the Palestine issue, surveyed their ambitions and found that over 95% wanted to migrate to Palestine. In the Zionist movement, the moderate pro-British (and British citizen) Weizmann, whose son died flying in the RAF, was undermined by Britain's anti-Zionist policies. Leadership of the movement passed to the Jewish Agency in Palestine, now dominated by the anti-British Socialist-Zionist party (Mapai) under David Ben-Gurion.

The British Empire was severely weakened by the war. In the Middle East, the war had made Britain conscious of its dependence on Arab oil. Shortly after VE Day, the Labour Party won the general election in Britain. Although Labour Party conferences had for years called for the establishment of a Jewish state in Palestine, the Labour government now decided to maintain the 1939 White Paper policies. Illegal migration (Aliyah Bet) became the main form of Jewish entry into Palestine. Across Europe, Bricha ("flight"), an organization of former partisans and ghetto fighters, smuggled Holocaust survivors from Eastern Europe to Mediterranean ports, where small boats tried to breach the British blockade of Palestine. Meanwhile, Jews from Arab countries began moving into Palestine overland. Despite British efforts to curb immigration, over 110,000 Jews entered Palestine during the 14 years of the Aliyah Bet. By the end of World War II, the Jewish population of Palestine had increased to 33% of the total population.

In an effort to win independence, Zionists now waged a guerrilla war against the British. The main underground Jewish militia, the Haganah, formed an alliance called the Jewish Resistance Movement with the Etzel and Stern Gang to fight the British. In June 1946, following instances of Jewish sabotage, such as the Night of the Bridges, the British launched Operation Agatha, arresting 2,700 Jews, including the leadership of the Jewish Agency, whose headquarters were raided. Those arrested were held without trial.
On 4 July 1946 a massive pogrom in Poland led to a wave of Holocaust survivors fleeing Europe for Palestine. Three weeks later, the Irgun bombed the British military headquarters in the King David Hotel in Jerusalem, killing 91 people. In the days following the bombing, Tel Aviv was placed under curfew and over 120,000 Jews, nearly 20% of the Jewish population of Palestine, were questioned by the police. In the US, Congress criticized British handling of the situation and considered delaying loans that were vital to British post-war recovery. The alliance between the Haganah and Etzel was dissolved after the King David bombing.

Between 1945 and 1948, 100,000–120,000 Jews left Poland. Their departure was largely organized by Zionist activists under the umbrella of the semi-clandestine organization Berihah ("Flight"). Berihah was also responsible for the organized emigration of Jews from Romania, Hungary, Czechoslovakia and Yugoslavia, totalling 250,000 Holocaust survivors including those from Poland. The British imprisoned the Jews trying to enter Palestine in the Atlit detainee camp and the Cyprus internment camps. Those held were mainly Holocaust survivors, including large numbers of children and orphans. In response to Cypriot fears that the Jews would never leave, and because the 75,000 quota established by the 1939 White Paper had never been filled, the British allowed the refugees to enter Palestine at a rate of 750 per month.

On 2 April 1947, the United Kingdom requested that the question of Palestine be handled by the General Assembly. The General Assembly created a committee, the United Nations Special Committee on Palestine (UNSCOP), to report on "the question of Palestine". In July 1947, UNSCOP visited Palestine and met with Jewish and Zionist delegations. The Arab Higher Committee boycotted the meetings. During the visit, the British Foreign Secretary, Ernest Bevin, ordered that passengers from an Aliyah Bet ship, SS Exodus 1947, be sent back to Europe. The migrants, Holocaust survivors, were forcibly removed from the ship by British troops at Hamburg, Germany. The principal non-Zionist Orthodox Jewish (or Haredi) party, Agudat Israel, recommended to UNSCOP that a Jewish state be set up after reaching a religious status quo agreement with Ben-Gurion. The agreement granted an exemption from military service to a quota of yeshiva (religious seminary) students and to all Orthodox women, made the Sabbath the national weekend, guaranteed kosher food in government institutions and allowed Orthodox Jews to maintain a separate education system.

The majority report of UNSCOP proposed "an independent Arab State, an independent Jewish State, and the City of Jerusalem", the last to be under "an International Trusteeship System". On 29 November 1947, in Resolution 181 (II), the General Assembly adopted the majority report of UNSCOP with slight modifications. The plan also called for the British to allow "substantial" Jewish migration by 1 February 1948. Neither Britain nor the UN Security Council took any action to implement the recommendation made by the resolution, and Britain continued detaining Jews attempting to enter Palestine. Concerned that partition would severely damage Anglo-Arab relations, Britain denied UN representatives access to Palestine during the period between the adoption of Resolution 181 (II) and the termination of the British Mandate. The British withdrawal was completed in May 1948.
However, Britain continued to hold Jewish immigrants of "fighting age" and their families on Cyprus until March 1949. The General Assembly's vote caused joy in the Jewish community and anger in the Arab community. Violence broke out between the sides, escalating into civil war. From January 1948, operations became increasingly militarized, with the intervention of a number of Arab Liberation Army regiments inside Palestine, each active in a variety of distinct sectors around the different coastal towns. They consolidated their presence in Galilee and Samaria. Abd al-Qadir al-Husayni came from Egypt with several hundred men of the Army of the Holy War. Having recruited a few thousand volunteers, he organized the blockade of the 100,000 Jewish residents of Jerusalem. The Yishuv tried to supply the city using convoys of up to 100 armoured vehicles, but largely failed. By March, almost all of the Haganah's armoured vehicles had been destroyed, the blockade was in full operation, and hundreds of Haganah members who had tried to bring supplies into the city had been killed. Up to 100,000 Arabs, from the urban upper and middle classes in Haifa, Jaffa and Jerusalem, or from Jewish-dominated areas, evacuated abroad or to Arab centres eastwards. This situation caused the US to withdraw its support for the partition plan, encouraging the Arab League to believe that the Palestinian Arabs, reinforced by the Arab Liberation Army, could put an end to the plan for partition. The British, on the other hand, decided on 7 February 1948 to support the annexation of the Arab part of Palestine by Transjordan. The Jordanian army was commanded by the British.

David Ben-Gurion reorganized the Haganah and made conscription obligatory. Every Jewish man and woman in the country had to receive military training. Thanks to funds raised by Golda Meir from sympathisers in the United States, and Stalin's decision to support the Zionist cause, the Jewish representatives of Palestine were able to purchase significant quantities of arms in Eastern Europe. Ben-Gurion gave Yigael Yadin responsibility for planning for the announced intervention of the Arab states. The result of his analysis was Plan Dalet, in which the Haganah passed from the defensive to the offensive. The plan sought to establish Jewish territorial continuity by conquering mixed zones. Tiberias, Haifa, Safed, Beisan, Jaffa and Acre fell, resulting in the flight of more than 250,000 Palestinian Arabs. On 14 May 1948, the day the last British forces left Haifa, the Jewish People's Council gathered at the Tel Aviv Museum and proclaimed the establishment of a Jewish state, to be known as the State of Israel.

State of Israel

In 1948, following the 1947–1948 war in Mandatory Palestine, the Israeli Declaration of Independence sparked the 1948 Arab–Israeli War. This resulted in the 1948 Palestinian expulsion and flight from the land that the State of Israel came to control, and led to waves of Jewish immigration from other parts of the Middle East. The latter half of the 20th century saw further conflicts between Israel and its neighbouring Arab nations. In 1967, the Six-Day War erupted; in its aftermath, Israel captured and occupied the Golan Heights from Syria, the West Bank from Jordan, and the Gaza Strip and the Sinai Peninsula from Egypt. In 1973, the Yom Kippur War began with an attack by Egypt on the Israeli-occupied Sinai Peninsula. In 1979, the Egypt–Israel peace treaty was signed, based on the Camp David Accords.
In 1993, Israel signed the Oslo I Accord with the Palestine Liberation Organization, which was followed by the establishment of the Palestinian National Authority. In 1994, the Israel–Jordan peace treaty was signed. Despite efforts to finalize the peace agreement, the conflict continues.
========================================
[SOURCE: https://www.theverge.com/youtube] | [TOKENS: 1483]
YouTube

YouTube launched in 2005 as a video sharing platform, and was acquired by Google (now Alphabet) in 2006. It has built an entire community of creators that run channels dedicated to topics like gaming, tech reviews, and beauty. It also houses news videos and entertainment such as music videos, movie trailers, and clips from late-night TV shows. YouTube’s rapid growth has not been without problems. YouTubers typically make money from ads that run in front of their videos, but if they break the platform’s rules, their channels and videos can be demonetized. Executives and moderators have worked to combat harassment, misinformation, terrorist propaganda, hate content, and other abuse. The Verge runs two YouTube channels, The Verge and Verge Science.

YouTube is starting to test its conversational AI tool with a “small group of users” on smart TVs, gaming consoles, and streaming devices. The tool, first introduced in 2023, lets you ask questions about the videos you’re watching. [YouTube Help]

A partial YouTube outage knocked out access to Google’s video service on Tuesday night. The outage appears to have started just before 8PM ET, but at least on the homepage, it appears to be resolved now. A note on YouTube’s support page says it went down due to problems with the recommendations system. “The issue with our recommendations system has been resolved and all of our platforms (YouTube.com, the YouTube app, YouTube Music, Kids, and TV) are back to normal!” Update: The service is back online.

Just after we entered the courtroom, we learned that a juror had been hospitalized. The parties decided to postpone today’s testimony from former Meta employees to see if the juror can return. Regardless, Meta CEO Mark Zuckerberg is expected to testify tomorrow, either before the original juror or an alternate.

Similar to the Prompted Playlists that Spotify launched in December, YouTube Music premium subscribers on iOS and Android can now use voice or text descriptions to turn ideas, genres, or vibes into personalized playlists.

Disney says ESPN and its sports business lost $110 million in income during the 15-day YouTube TV blackout late last year. That’s almost double the daily hit analysts had estimated at the time, and it’s just for sports, not counting any hit to ABC or Disney’s entertainment channels. [Disney]

It follows Snap in reaching an agreement to resolve the first of several cases slated to go to trial this year about social media’s alleged harm to users, an attorney for the 19-year-old plaintiff confirmed. That leaves Meta and YouTube as defendants in the case going to jury selection today. 2026 is the year of social media’s legal reckoning

Last week, it was reported that captions uploaded in the SRV3 (aka YTT) format, which allows for styling captions with things like bolding and custom colors, wouldn’t appear on videos. But YouTube says it has “temporarily limited” uploading SRV/YTT captions “as they may cause video playback to fail for some viewers.” [YouTube Help]

Rather than science and the modern world, the new channel looks back and tells stories from our past. The first episode is the fascinating tale of Zheng Yi Sao, the Pirate Queen of China. The art style is also quite different, more painterly. You can watch the first episode below.

Videos that include topics like reproductive rights, self-harm, suicide, and abuse will be able to earn more revenue now as long as they don’t include graphic scenes or descriptions, as spotted by Tubefilter.
Previously, videos that even mentioned potentially controversial topics would often be demonetized.

A content deal has reportedly been struck that will see the BBC producing original shows for YouTube, giving the UK broadcaster access to younger audiences and advertising revenue overseas, something it desperately needs to bolster ad-free funding at home. The deal could be announced next week. [Financial Times]

We’ve got something special for you today. It’s my friend Hank Green, longtime internet creator, science educator, and viral TikTok star, interviewing Dropout CEO Sam Reich, now in full video on our Decoder YouTube channel. Hank did this episode as a guest host last summer while I was out with our new baby, and it’s a fan favorite, bringing together two internet personalities who’ve known each other for a very long time and who have a lot of inside knowledge about how the internet, Hollywood, and entertainment all intertwine. We think it’s one of the best episodes of Decoder we put out last year, and it’s honestly just a really fun conversation. Here’s the full transcript in case you want to read, rather than watch, the interview.

On Thursday night, the White House Live News section featured YouTuber @RealMattMoney, for reasons that remain unclear. According to Bloomberg, “The White House is aware of the incident and looking into the matter, a White House official said on the condition of anonymity.”

The Screen Culture and KH Studio YouTube pages have suddenly disappeared, taking their fake clips with them, reports Deadline. An earlier Deadline investigation showed how they operated, mixing official movie footage with AI-generated images, which some movie studios were profiting from by claiming the ad revenue they brought in. YouTube spokesperson Jack Malon provided this statement to The Verge: “After their initial suspension, these channels made the necessary corrections in order to be readmitted into the YouTube Partner Program. However, once monetizing again, they reverted to clear violations of our spam and misleading metadata policies, and as a result, they have been terminated from the platform.”

US-based car reviewers are going gaga over Chinese EVs. Their audiences wonder why they can’t buy them.

The show, Outside Tonight, will be hosted by Recess Therapy host Julian Shapiro-Barnum. “Set in public parks and on street corners, Outside Tonight with Julian Shapiro-Barnum revitalizes a classic format with weekly, live shows packed with star studded interviews, audience driven games, live music, and nonstop comedy,” YouTube says. [blog.youtube]

During an earnings call, Disney CEO Bob Iger said the company isn’t “trying to break any new ground” on a deal that would end the ESPN blackout: “The deal that we have proposed is equal to or better than what other large distributors have already agreed to... While we’ve been working tirelessly to close this deal and restore our channels to the platform, it’s also imperative that we make sure that we agree to a deal that reflects the value that we deliver.”

The Athletic reports that Disney and Google’s CEOs have become “more involved” in the negotiations nearly two weeks after ESPN, ABC, and other Disney-owned channels went dark on YouTube TV. Sources tell the outlet that YouTube TV is still trying to determine how much it should pay for Disney’s non-sports networks like Freeform, FX, and National Geographic.
[The Athletic]

Ahead of one of the biggest games of the NFL season so far, I have a feeling there might be something he wants to talk about. He’ll be on the show at 8PM ET.
========================================
[SOURCE: https://www.theverge.com/creators] | [TOKENS: 1378]
Creators

YouTube, Instagram, SoundCloud, and other online platforms are changing the way people create and consume media. The Verge’s Creators section covers the people using these platforms, what they’re making, and how those platforms are changing (for better and worse) in response to the vloggers, influencers, podcasters, photographers, musicians, educators, designers, and more who are using them. The Verge’s Creators section also looks at the way creators are able to turn their projects into careers, from Patreons and merch sales to ads and Kickstarters, and the ways they’re forced to adapt to changing circumstances as platforms crack down on bad actors and respond to pressure from users and advertisers. New platforms are constantly emerging, and existing ones are ever-changing; what creators have to do to succeed is always going to look different from one year to the next.

Its AI disclosures are inconsistent at best. One night in the audience of Netflix’s most ambitious live show yet.

Latest In Creators

The roughly $1.2 billion deal, expected to close in the second quarter of 2026, gives eBay a social, creator-driven secondhand fashion marketplace popular with Gen Z and younger millennials. The platform feels more like scrolling Instagram outfits than browsing listings, which could help modernize eBay’s image.

This project from Florida-based artist Gwak needs to make that horrid beep/buzzing noise when the phone rings for authenticity, but the yellowing and half-peeled sticker is chef’s kiss.

A new Sensor Tower report suggests the USDS takeover managed to retain most of its users despite a bumpy start and concerns about the new owners: the average number of TikTok’s daily active users in the US remains around 95% of its usership compared to the week of Jan. 19-25.

Kamala Harris’ campaign account, @KamalaHQ, has rebranded as a digital rapid response operation.

Meet Mary, the stop-motion 3D witch from Portsmith.

The platform isn’t the first service to try them. Redfin is doing a geoguessing-themed game of skill to give away a million-dollar house in its app, based on clues found in its Super Bowl ad, and Rainbolt is part of the promo, but he’s not allowed to help, based on the rules here. Meanwhile, Salesforce’s Mr. Beast ad promises a million-dollar giveaway based on the clues in its 30-second ad.

Why you can’t label your way into consensus reality amid the AI deepfake apocalypse.

We can confirm that basic features in the US do seem to work reliably now on the new Trump-friendly entity’s servers, even if the algorithm still seems a little wonky at times. Per TikTok USDS Joint Venture LLC: “We have successfully restored TikTok back to normal after a significant outage caused by winter weather took down a primary US data center site operated by Oracle. The winter storm led to a power outage which caused network and storage issues at the site and impacted tens of thousands of servers that help keep TikTok running in the US.” Great, but how many of y’all have abandoned the platform?
Nick Shirley and others like him are reminiscent of the yellow journalism of the 19th century, updated and turbocharged by social media algorithms.

Without explaining why its cloud status page doesn’t list any outages, TikTok USDS part-owner Oracle outed itself as the previously unnamed “US data center partner” that experienced a power outage over the weekend, blocking videos from publishing and unraveling its all-important algorithm for a few days. Michael Egbert, Oracle spokesperson: “Over the weekend, an Oracle data center experienced a temporary weather-related power outage which impacted TikTok. The challenges U.S. TikTok users may be experiencing are the result of technical issues that followed the power outage, which Oracle and TikTok are working to quickly resolve.”

TikTok’s US service crashed early Sunday morning, and as of late Monday night, it still hasn’t fully recovered. After finally announcing the problem started with a power outage at an unnamed partner’s data center, TikTok USDS followed up with an updated statement saying, “While the network has been recovered, the outage caused a cascading systems failure that we’ve been working to resolve together with our data center partner,” and listing some of the bugs users are experiencing. There’s still no ETA for a full fix.

Despite claims floating around social media, the truth is a bit more complicated, not least because TikTok in the US is still largely down, about a day and a half after its data center power outage problems started. While tweets from random users, the governor of California, and PopBase claimed TikTok US DMs now censor “Epstein,” testing it from our end showed that its messaging feature bans many innocuous single-word messages, like “test.” Using the convicted sex offender’s name in a sentence, however, goes through unbanned.

Oracle and a group of investors now control TikTok in the US, and promise to retrain the app’s algorithm on US user data. Though TikTok blames some of the issues users experienced over the weekend on a power outage, many are still concerned about how their feed could change.

Whether this is just a regular outage or a result of this week’s changes in management, reports tracked on Downdetector and Reddit confirm many people are having trouble loading TikTok right now. If the mobile app loads, it’s not consistently showing comments or other features, and the algorithm managing the For You page doesn’t feel like it’s working correctly. Update, January 26th: TikTok is still having problems in the US, which it says are connected to a data center power outage.
========================================