[SOURCE: https://en.wikipedia.org/wiki/Workplace_(software)] [TOKENS: 348]
Workplace (software) Workplace is an online collaborative software tool developed by Meta Platforms. Its features include instant messaging, file sharing, and video and audio conferencing. It was announced on 14 May 2024 that Meta would be sunsetting Workplace in 2026. History Workplace was first announced on January 14, 2015. It launched in beta as Facebook at Work before officially launching in October 2016. Free accounts were discontinued as of February 2021. Workplace for Good was launched in June 2018 to provide a free version of Workplace for registered non-profits and staff of educational institutions. In October 2018, Workplace announced the launch of Safety Check. Originally developed by Facebook engineers after the 2011 Japanese tsunami, Safety Check for Workplace allows organizations to identify employees who may be affected by a crisis and to send and monitor responses. Safety Check has been used by Delta Air Lines (during Hurricane Florence and the Hokkaido earthquake). As of May 2021, there were 7 million paying users. In May 2024, Meta announced that it would be phasing out Workplace and shutting it down completely in 2026. Functions While Workplace accounts are set up and run separately from Facebook accounts, Workplace uses much of Facebook's underlying technology. Workplace uses machine learning to rank information in a user's news feed and make recommendations, while 'downranking' less relevant information. Workplace data is stored across 12 Facebook-owned and operated data centers. Reception PC Mag described it as "Facebook if your employer restricted membership to only your coworkers and bosses" and concluded that "Workplace is what an organization makes of it".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-Williams_2016_78-0] [TOKENS: 11899]
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 billion to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary ones are the Noachian, the Hesperian, and the Amazonian, outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
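The bulk figures quoted above for Mars (about 15% of Earth's volume, 11% of Earth's mass, and 38% of Earth's surface gravity) are mutually consistent, since volume scales with the cube of the radius and surface gravity scales as mass divided by radius squared. The following is a minimal consistency-check sketch, assuming volumetric mean radii of about 3,389.5 km for Mars and 6,371 km for Earth and a mass ratio of 0.107 (standard reference values, not stated in the text):

```python
# Consistency check for the Mars/Earth bulk figures quoted above.
# Assumed inputs (standard reference values, not taken from this article):
R_MARS_KM = 3389.5   # volumetric mean radius of Mars
R_EARTH_KM = 6371.0  # volumetric mean radius of Earth
MASS_RATIO = 0.107   # Mars's mass as a fraction of Earth's (~11%)

radius_ratio = R_MARS_KM / R_EARTH_KM           # ~0.53
volume_ratio = radius_ratio ** 3                # volume scales with R^3   -> ~0.15
gravity_ratio = MASS_RATIO / radius_ratio ** 2  # g = GM / R^2             -> ~0.38

print(f"volume:  {volume_ratio:.2f} of Earth's")   # ~0.15, matching 'about 15%'
print(f"gravity: {gravity_ratio:.2f} of Earth's")  # ~0.38, matching 'about 38%'
```

The same ratios give a mean density of roughly 0.107 / 0.15 ≈ 0.71 of Earth's, consistent with the statement above that Mars is less dense than Earth.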
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogenous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface are on average 0.64 millisieverts of radiation per day, and significantly less than the radiation of 1.84 millisieverts per day or 22 millirads per day during the flight to and from Mars. For comparison the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with potentially levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that the methane concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons that alternate between its northern and southern hemispheres, as on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight Earth receives. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also deposit a covering of carbon dioxide frost (dry ice) on the polar ice caps. Hydrology While Mars holds substantial amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
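Two of the atmospheric figures quoted above, the roughly 10.8 km scale height and the 43% insolation relative to Earth, can be reproduced from first principles. The following is a minimal sketch, assuming a mean molar mass of about 43.3 g/mol for the CO2-dominated atmosphere and a representative temperature of about 210 K (assumed values, not given in the text):

```python
# Assumed physical inputs (standard values, not from the article text):
R = 8.314          # J/(mol*K), universal gas constant
M = 0.0433         # kg/mol, mean molar mass of the CO2-dominated atmosphere
T = 210.0          # K, representative mean atmospheric temperature
G_MARS = 3.71      # m/s^2, Martian surface gravity (~38% of Earth's 9.81)
D_MARS_AU = 1.52   # Mars's mean distance from the Sun, as quoted in the text

# Isothermal pressure scale height: H = R*T / (M*g)
scale_height_km = R * T / (M * G_MARS) / 1000.0
print(f"scale height ~ {scale_height_km:.1f} km")   # ~10.9 km, close to the quoted 10.8 km

# Sunlight falls off with the inverse square of distance from the Sun.
insolation_fraction = 1.0 / D_MARS_AU ** 2
print(f"insolation ~ {insolation_fraction:.0%} of Earth's")  # ~43%, matching the text
```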
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium (D/H) in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference and thus the delta-v needed to transfer between Mars and Earth is the second lowest for Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars has its closest approach to Earth (opposition) in a synodic period of 779.94 days. It should not be confused with Mars conjunction, where the Earth and Mars are at opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. 
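The synodic period of 779.94 days quoted in the orbital-motion discussion above follows directly from the two sidereal orbital periods, and it also explains why launch opportunities to Mars recur roughly every 26 months and oppositions about every 2.1 years. A minimal sketch of the calculation; the 365.256-day figure for Earth's sidereal year and the 30.44-day average month length are assumed standard values not given in the text:

```python
# Synodic period of Mars as seen from Earth: 1/S = 1/P_earth - 1/P_mars
P_EARTH_DAYS = 365.256  # Earth's sidereal year (assumed standard value)
P_MARS_DAYS = 687.0     # Mars's orbital period, as quoted in the text

synodic_days = 1.0 / (1.0 / P_EARTH_DAYS - 1.0 / P_MARS_DAYS)
print(f"synodic period ~ {synodic_days:.1f} days")      # ~779.9 days, matching 779.94
print(f"that is ~ {synodic_days / 30.44:.1f} months")    # ~25.6 months (launch windows ~every 26 months)
print(f"or ~ {synodic_days / P_EARTH_DAYS:.1f} years")   # ~2.1 years between oppositions
```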
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
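Returning briefly to the apparent motion of Phobos described above: the roughly 11-hour interval between successive risings follows from its orbital period, obtained from Kepler's third law, being much shorter than the Martian day. A minimal sketch, assuming a standard gravitational parameter for Mars of about 4.2828 × 10¹³ m³/s² (a standard reference value, not given in the text):

```python
import math

GM_MARS = 4.2828e13   # m^3/s^2, standard gravitational parameter of Mars (assumed value)
A_PHOBOS_M = 9.376e6  # m, orbital radius of Phobos, as quoted in the text
DAY_HOURS = 24.6      # approximate length of the Martian day, as quoted in the text
                      # (strictly the sidereal rotation period ~24.62 h; the difference is negligible here)

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / GM)
period_hours = 2 * math.pi * math.sqrt(A_PHOBOS_M ** 3 / GM_MARS) / 3600.0
print(f"Phobos orbital period ~ {period_hours:.2f} h")   # ~7.7 h, shorter than the Martian day

# Interval between successive risings as seen from the rotating surface
rise_interval = 1.0 / (1.0 / period_hours - 1.0 / DAY_HOURS)
print(f"time between risings ~ {rise_interval:.1f} h")   # ~11 h, matching the text
```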
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις. Commonly, the Greek name for the planet now referred to as Mars, was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy was presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609 Johannes Kepler published a 10 year study of Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum.: 433–437 In 1610 the first use of a telescope for astronomical observation, including Mars, was performed by Italian astronomer Galileo Galilei. With the telescope the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition to Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. 
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flying past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioning in orbit before reaching its destination, Phobos. In 1997 Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, has been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, including 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which offers visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. A number of further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
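The habitable-zone reasoning above can be made concrete with a simple radiative-balance estimate of Mars's equilibrium temperature. This is only a rough sketch; the solar constant at Earth (about 1361 W/m²) and a Bond albedo of roughly 0.25 for Mars are assumed standard values not given in the text:

```python
# Assumed inputs (standard reference values, not from the article):
S_EARTH = 1361.0    # W/m^2, solar constant at 1 AU
SIGMA = 5.670e-8    # W/(m^2*K^4), Stefan-Boltzmann constant
ALBEDO = 0.25       # approximate Bond albedo of Mars (assumed)
D_MARS_AU = 1.52    # Mars's distance from the Sun, as quoted in the text

# Radiative balance for a rotating sphere:
#   absorbed (1 - A) * S/d^2 * pi*R^2  =  emitted 4*pi*R^2 * sigma * T^4
flux_at_mars = S_EARTH / D_MARS_AU ** 2                          # ~589 W/m^2
t_eq = ((1.0 - ALBEDO) * flux_at_mars / (4.0 * SIGMA)) ** 0.25
print(f"equilibrium temperature ~ {t_eq:.0f} K ({t_eq - 273.15:.0f} C)")
# ~210 K (about -63 C), well below the freezing point of water, in line with the
# text's point that Mars sits near the outer edge of the Sun's habitable zone.
```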
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars. Likewise, the glass in impact craters on Mars could have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-147] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
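The doubling rate quoted above compounds quickly; as a rough, illustrative check (a minimal sketch of the arithmetic, not a measurement), "doubling every 18 months" works out to roughly 1.6x per year and about 100x per decade:

```python
# Back-of-the-envelope check of what "doubling every 18 months" implies.
DOUBLING_PERIOD_MONTHS = 18

def growth_factor(months: float) -> float:
    """Multiplicative growth accumulated over the given number of months."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"per year:   x{growth_factor(12):.2f}")   # ~1.59x each year
print(f"per decade: x{growth_factor(120):.0f}")  # ~102x over ten years
```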
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s, the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by allowing activists to coordinate protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems for information transfer, sharing and exchanging business data and logistics and is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
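Returning to the HyperText Transfer Protocol described in the applications-and-services passage above, the request/response exchange it defines can be sketched with Python's standard http.client module. This is a minimal illustration under stated assumptions: example.org is used purely as a placeholder host, and the User-Agent string is an arbitrary label, not anything prescribed by the protocol.

```python
# Minimal sketch of an HTTP request/response exchange using only the standard library.
# The client opens a TCP connection, sends a GET request for a resource identified by
# a URI path, and reads back a status line, headers, and a body (the document itself).
import http.client

conn = http.client.HTTPSConnection("example.org", timeout=10)          # illustrative host
conn.request("GET", "/", headers={"User-Agent": "doc-example/0.1"})    # arbitrary UA string
response = conn.getresponse()

print(response.status, response.reason)      # e.g. 200 OK
print(response.getheader("Content-Type"))    # media type of the returned resource
body = response.read()                       # response body as bytes
print(len(body), "bytes received")
conn.close()
```

Higher-level libraries and browsers wrap this same exchange; the status line, headers, and body shown here are the protocol elements the text refers to.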
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptional layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP), or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. 
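The addressing machinery described above (DNS resolution, CIDR prefixes, subnet masks, and routing-table prefix matching) can be illustrated with Python's standard library. This is a sketch, not a router implementation: the prefixes are the documentation examples already quoted in the text, the hostname is the one used above as a DNS example, and the lookup at the end is a simplified longest-prefix match.

```python
# Sketch of the addressing concepts above using the standard library.
# The DNS lookup needs network access; everything else is pure arithmetic.
import ipaddress
import socket

# DNS: translate a human-readable name into IP addresses.
addrs = {info[4][0] for info in socket.getaddrinfo("en.wikipedia.org", 443)}
print("resolved addresses:", addrs)

# CIDR: 198.51.100.0/24 has a 24-bit routing prefix and 8 bits for hosts.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)         # 255.255.255.0, the equivalent subnet mask
print(net.num_addresses)   # 256 addresses, 198.51.100.0 through 198.51.100.255

# The network prefix is the address ANDed with the netmask, so membership
# tests reduce to comparing prefixes.
addr = ipaddress.ip_address("198.51.100.37")
print(addr in net)         # True

# Longest-prefix match, as a routing table performs it: among the routes that
# contain the destination, the most specific (longest) prefix wins.
routes = [ipaddress.ip_network(p) for p in ("0.0.0.0/0", "198.51.100.0/24", "198.51.100.32/27")]
best = max((n for n in routes if addr in n), key=lambda n: n.prefixlen)
print("forward via route", best)  # 198.51.100.32/27
```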
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers using cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), malware variants number has increased to 669,947,865 in 2017, which is twice as many malware variants as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. 
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zhongyuan] | [TOKENS: 1461] |
Contents Zhongyuan Zhongyuan (Chinese: 中原; pinyin: Zhōngyuán), the Central Plain(s), also known as Zhongtu (Chinese: 中土; pinyin: Zhōngtǔ, lit. 'central land') and Zhongzhou (Chinese: 中州; pinyin: Zhōngzhōu, lit. 'central region'), commonly refers to the part of the North China Plain surrounding the lower and middle reaches of the Yellow River, centered on the region between Luoyang and Kaifeng. It has been perceived as the birthplace of Chinese civilization. Historically, the Huaxia people viewed Zhongyuan as 'the center of the world'. Human activities in the Zhongyuan region can be traced back to the Palaeolithic period. In prehistoric times, Huaxia, a confederation of tribes that later developed into the Han ethnicity, lived along the middle and lower reaches of the Yellow River. They used the term 'Zhongguo' (Central State) to distinguish themselves from the Siyi tribes, whom they perceived as 'barbaric'. For a large part of Chinese history, Zhongyuan was the political, economic, and cultural center of Chinese civilization, as over 20 dynasties located their capitals in this region. In the modern concept, the term 'Central Plains Region' is used to define the Zhongyuan area. In a narrow sense, it refers to the present-day Henan Province in the central part of China. A broader interpretation of the Central Plains would also include Henan's neighboring provinces of Shaanxi, Hebei, Shanxi, and Shandong, as well as the northern part of Anhui and the northwestern part of Jiangsu. Geography The north, west, and south sides of Zhongyuan are encircled by mountains, predominantly the Taihang Mountains on the northwestern side and the Funiu and Xionger Mountains to the west. The central and eastern areas of Zhongyuan form part of the North China Plain. The Yellow River flows through the region from west to east. The Huai River and Hai River, as well as tributaries of the Yangtze River, also pass through Zhongyuan. Since ancient times, Zhongyuan has been a strategically important region of China, regarded as 'the center and hub of the world'. The alluvial deposits of the Yellow River formed the vast plains of Zhongyuan in the Palaeozoic period. The region has sufficient water resources for plant growth, making it the center of the Chinese agrarian civilization, known as the 'Breadbasket of China'. Zhongyuan has a temperate monsoon climate with distinct seasons. It is usually hot and humid during the summer, and cold and dry in the winter. Definition The concept of Zhongyuan has shifted across historical periods and contexts. The term Zhongyuan first appeared in the Classic of Poetry, without specifying any exact geographic location. It was during the Spring and Autumn period (770–476 BC) that the word came to denote the Central Plains region, and not until the Northern and Southern dynasties (420–589 AD) that 'Zhongyuan' became widely accepted as a geographical concept. As a geographical term, Zhongyuan may denote different regions. It usually refers to the middle and lower reaches of the Yellow River, but sometimes also encompasses the reaches along the Yangtze and Huai River, and even the whole North China Plain. Apart from being a geographical location, the term 'Zhongyuan' is also used as a historical and cultural concept that represents the dominance of the Han ethnicity. In older texts, 'Zhongguo' (中國; 'Middle Kingdom') is used as a synonym for the Central Plains. Today, 'Zhongguo' refers to the country of China. 
Other archaic synonyms include 'Zhongtu' (中土; 'Middle Earth') and 'Zhongzhou' (中州; 'Middle Province').[a] History The history of Zhongyuan can be dated back to prehistoric times. There were traces of human activities in Zhongyuan about half a million years ago. Archaeological studies have shown that as far back as 80,000 to 100,000 years ago, the ancient people of Zhongyuan were using stone tools. Painted pottery and stone tools excavated from sites of the Yangshao culture (5000 to 3000 BC) and Longshan culture (3000 to 1900 BC) show that Zhongyuan was at the forefront of Chinese civilization throughout the Stone Age. After the rise of the Erlitou culture (1900 to 1500 BC), Zhongyuan entered the Bronze Age. The emergence of private ownership and social classes led to the formation of the first dynasty in Chinese history, the Xia dynasty. The Xia dynasty established its regime centered on Zhongyuan, setting a precedent for later dynasties to make Zhongyuan the central region. From the rise of the Xia dynasty (c. 2070–c. 1600 BC) to the fall of the Song dynasty (960–1279), most of the legitimate dynasties established their capitals within the Zhongyuan area, except for the Eastern Jin and the Southern Song. It was not until the Yuan (1271–1368) and Ming dynasties (1368–1644) that the political center of China relocated, as the Mongol Empire established the Yuan dynasty with its capital at Dadu (Khanbaliq, now Beijing). Later, the Yongle Emperor of the Ming dynasty moved the capital to his power base in Beijing. Language Central Plains Mandarin (or Zhongyuan Mandarin) is the major language and native tongue spoken in the Zhongyuan region. It is a variety of Mandarin Chinese that formed and developed gradually on the basis of the standard pronunciations of Mandarin and its predecessor, Yayan. In the Yuan dynasty, the rime book Zhongyuan Yinyun (Rhymes of the Central Plains), written by Zhou Deqing, reflected the standard pronunciation of Early Mandarin. Some linguists argue that the Early Mandarin recorded in Zhongyuan Yinyun was based on pronunciation standards derived from the Luoyang and Bianliang dialects of Zhongyuan, which had been prevalent in the Song dynasty. In modern China, Central Plains Mandarin is mainly used in Henan, Shandong, Anhui, Shanxi, Shaanxi, Gansu, and Hebei. The population of native Central Plains Mandarin speakers is approximately 124 million. In terms of tone, the key characteristic of modern Central Plains Mandarin is that Early Mandarin entering-tone syllables with neutral or aspirated voiced initial consonants are now pronounced in the first tone (high tone), while those with voiced initial consonants are now pronounced in the second tone (rising tone). See also Notes References |
======================================== |
[SOURCE: https://www.theverge.com/tech/879864/samsung-ai-generated-edited-video-ads-slop] | [TOKENS: 1569] |
Samsung is slopping AI ads all over its social channels
Its AI disclosures are inconsistent at best.
by Jess Weatherbed, News Reporter | Feb 17, 2026, 10:55 AM UTC
Even if the AI wasn’t being disclosed, its usage in these videos isn’t exactly subtle. Image: Samsung / The Verge
After cramming AI into every inch of its smartphones, appliances, and other devices, Samsung is now increasingly slapping it across social media. Several videos posted to the company’s YouTube, Instagram, and TikTok accounts in recent weeks were made or edited using generative AI tools, including its latest teaser for the upcoming Galaxy S26 series.
The “Brighten your after hours” video features two people skateboarding at night, and supposedly shows off the low-light video capabilities of the upcoming Galaxy S26 devices. Fine print appears at the bottom of the screen toward the end of the video, flagging that it was “generated with the assistance of AI tools,” but there are obvious signs even without the disclosure. The vegetable-laden shopping bags look artificial and unnaturally weighted, for example, and cobblestones in the road appear to shift around.
This video, and several others that use AI to promote camera features, all carry the tagline “Can your phone do that?” but don’t specify if Samsung is using its own phones or AI models to generate the content. It wouldn’t be the first time that the company has misrepresented its smartphone camera capabilities in marketing materials.
Samsung has also been posting low-quality cartoons (which look suspiciously Disney-styled) to promote AI home appliances, cat edits, and snowmen who are ironically struggling to determine what’s real — all made or manipulated using AI.
Most include a similar AI disclosure in the clips, but YouTube and Instagram notably haven’t added their own AI labels on the “Brighten your after hours” video, despite Google, Meta, and Samsung all having adopted C2PA — the authenticity standard used by most AI labeling systems.
We have reached out to Samsung, Meta, and Google for comment. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hebrew_calendar] | [TOKENS: 12722] |
Contents Hebrew calendar The Hebrew calendar (Hebrew: הַלּוּחַ הָעִבְרִי, romanized: HaLuakh ha'Ivri, IPA: [ha‿ˈluaχ ha‿ʔivˈʁi]), also called the Jewish calendar, is a lunisolar calendar used today for Jewish religious observance and as an official calendar of Israel. It determines the dates of Jewish holidays and other rituals, such as yahrzeits and the schedule of public Torah readings. In Israel, it is used for religious purposes, provides a time frame for agriculture, and is an official calendar for civil holidays alongside the Gregorian calendar. Like other lunisolar calendars, the Hebrew calendar consists of months of 29 or 30 days which begin and end at approximately the time of the new moon. As 12 such months comprise a total of just 354 days, an extra lunar month is added every 2 or 3 years so that the long-term average year length closely approximates the actual length of the solar year. Originally, the beginning of each month was determined based on physical observation of a new moon, while the decision of whether to add the leap month was based on observation of natural agriculture-related events in ancient Israel. Between the years 70 and 1178, these empirical criteria were gradually replaced with a set of mathematical rules. Month length now follows a fixed schedule which is adjusted based on the molad interval (a mathematical approximation of the mean time between new moons) and several other rules, while leap months are now added in 7 out of every 19 years according to the Metonic cycle. Nowadays, Hebrew years are generally counted according to the system of Anno Mundi (Latin: "in the year of the world"; Hebrew: מבריאת העולם, "from the creation of the world", abbreviated AM) according to traditional Jewish interpretation of the chronology of the Hebrew Bible. This system attempts to calculate the number of years since the creation of the world according to the Genesis creation narrative and subsequent Biblical stories. The current Hebrew year, AM 5786, began at sunset on 22 September 2025 and will end at sunset on 11 September 2026.[a] Components Based on the classic rabbinic interpretation of Genesis 1:5 ("There was evening and there was morning, one day"), a day in the rabbinic Hebrew calendar runs from sunset (the start of "the evening") to the next sunset. Similarly, Yom Kippur, Passover, and Shabbat are described in the Bible as lasting "from evening to evening". The days are therefore figured locally. Halachically, the exact time when days begin or end is uncertain: this time could be either sundown (shekiah) or else nightfall (tzait ha'kochavim, "when the stars appear"). The time between sundown and nightfall (bein hashmashot) is of uncertain status. Thus (for example) observance of Shabbat begins before sundown on Friday and ends after nightfall on Saturday, to be sure that Shabbat is not violated no matter when the transition between days occurs. Instead of the International Date Line convention, there are varying opinions as to where the day changes. (See International date line in Judaism.) Judaism uses multiple systems for dividing hours. In one system, the 24-hour day is divided into fixed hours equal to 1⁄24 of a day, while each hour is divided into 1080 halakim (parts, singular: helek). A helek is 3+1⁄3 seconds (1⁄18 minute). 
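As a quick check on the parts (halakim) arithmetic described above, the sketch below converts a time stated as "hours and parts after 6 pm" into an ordinary clock time; it reproduces the 11:11:20 pm figure quoted later in the article for the molad tohu, which is given as 5 hours and 204 parts. This is an illustrative sketch; the function name is my own, not something defined in the article.

```python
# Sketch: convert "hours and parts after 6 pm" into a 24-hour clock time.
# 1 hour = 1080 halakim (parts); 1 part = 3 1/3 seconds.

def parts_to_clock(hours: int, parts: int) -> tuple[int, int, int]:
    """Return (hour, minute, second) for a time given as hours and parts
    after 6 pm, the conventional start of the Hebrew day."""
    total_seconds = hours * 3600 + parts * 10 // 3   # 1 part = 10/3 seconds
    clock = (18 * 3600 + total_seconds) % 86400       # offset from 18:00
    return clock // 3600, (clock % 3600) // 60, clock % 60

# Molad tohu: "5 hours and 204 parts" corresponds to 11:11:20 pm.
print(parts_to_clock(5, 204))   # (23, 11, 20)
```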
The ultimate ancestor of the helek was a Babylonian unit of time called a barleycorn, equal to 1⁄72 of a Babylonian time degree (1° of celestial rotation). These measures are not generally used for everyday purposes; their best-known use is for calculating and announcing the molad. In another system, the daytime period is divided into 12 relative hours (sha'ah z'manit, also sometimes called "halachic hours"). A relative hour is defined as 1⁄12 of the time from sunrise to sunset, or dawn to dusk, as per the two opinions in this regard. Therefore, an hour can be less than 60 minutes in winter, and more than 60 minutes in summer; similarly, the 6th hour ends at solar noon, which generally differs from 12:00. Relative hours are used for the calculation of prayer times (zmanim); for example, the Shema must be recited in the first three relative hours of the day. Neither system is commonly used in ordinary life; rather, the local civil clock is used. This is even the case for ritual times (e.g. "The latest time to recite Shema today is 9:38 AM"). The Hebrew week (שבוע, shavua) is a cycle of seven days, mirroring the seven-day period of the Book of Genesis in which the world is created. The names for the days of the week are simply the day number within the week. The week begins with Day 1 (Sunday) and ends with Shabbat (Saturday). (More precisely, since days begin in the evening, weeks begin and end on Saturday evening. Day 1 lasts from Saturday evening to Sunday evening, while Shabbat lasts from Friday evening to Saturday evening.) Since some calculations use division, a remainder of 0 signifies Saturday. In Hebrew, these names may be abbreviated using the numerical value of the Hebrew letters, for example יום א׳ (Day 1, or Yom Rishon (יום ראשון)): The names of the days of the week are modeled on the seven days mentioned in the Genesis creation account. For example, Genesis 1:8 "... And there was evening and there was morning, a second day" corresponds to Yom Sheni meaning "second day". (However, for days 1, 6, and 7 the modern name differs slightly from the version in Genesis.) The seventh day, Shabbat, as its Hebrew name indicates, is a day of rest in Judaism. In Talmudic Hebrew, the word Shabbat (שַׁבָּת) can also mean "week", so that in ritual liturgy a phrase like "Yom Reviʻi beShabbat" means "the fourth day in the week". Jewish holidays can only fall on the weekdays shown in the following table: The period from 1 Adar (or Adar II, in leap years) to 29 Marcheshvan contains all of the festivals specified in the Bible (Purim, Passover, Shavuot, Rosh Hashanah, Yom Kippur, Sukkot, and Shemini Atzeret). The lengths of months in this period are fixed, meaning that the day of week of Passover dictates the day of week of the other Biblical holidays. However, the lengths of the months of Marcheshvan and Kislev can each vary by a day (due to the Rosh Hashanah postponement rules which are used to adjust the year length). As a result, the holidays falling after Marcheshvan (starting with Chanukah) can fall on multiple days for a given row of the table. A common mnemonic is "לא אד"ו ראש, ולא בד"ו פסח", meaning: "Rosh HaShana cannot be on Sunday, Wednesday or Friday, and Passover cannot be on Monday, Wednesday or Friday", where each day's numerical equivalent in gematria is used, such that א' = 1 = Sunday, and so forth. From this rule, every other date can be calculated by adding weeks and days until that date's possible day of the week can be derived. 
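The relative-hour (sha'ah z'manit) system described above divides the daylight period into twelve equal parts, so a "halachic hour" stretches or shrinks with the season. The sketch below shows one way to compute such an hour and the "first three relative hours" deadline from a given sunrise and sunset; the function names, the example times, and the choice of sunrise-to-sunset (rather than dawn-to-dusk) are illustrative assumptions rather than anything fixed by the article.

```python
from datetime import datetime, timedelta

def relative_hour(sunrise: datetime, sunset: datetime) -> timedelta:
    """One sha'ah z'manit: 1/12 of the sunrise-to-sunset span."""
    return (sunset - sunrise) / 12

def latest_shema(sunrise: datetime, sunset: datetime) -> datetime:
    """End of the first three relative hours of the day."""
    return sunrise + 3 * relative_hour(sunrise, sunset)

# Example: a winter day with sunrise 06:40 and sunset 16:40 has
# 50-minute relative hours, so the deadline falls at 09:10.
sunrise = datetime(2025, 12, 21, 6, 40)
sunset = datetime(2025, 12, 21, 16, 40)
print(relative_hour(sunrise, sunset))  # 0:50:00
print(latest_shema(sunrise, sunset))   # 2025-12-21 09:10:00
```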
The Hebrew calendar is a lunisolar calendar, meaning that months are based on lunar months, but years are based on solar years.[b] The calendar year features twelve lunar months of 29 or 30 days, with an additional lunar month ("leap month") added periodically to synchronize the twelve lunar cycles with the longer solar year. These extra months are added in seven years (3, 6, 8, 11, 14, 17, and 19) out of a 19-year cycle, known as the Metonic cycle (See Leap months, below). The beginning of each Jewish lunar month is based on the appearance of the new moon. Although originally the new lunar crescent had to be observed and certified by witnesses (as is still done in Karaite Judaism and Islam), nowadays Jewish months have generally fixed lengths which approximate the period between new moons. For these reasons, a given month does not always begin on the same day as its astronomical conjunction. The mean period of the lunar month (precisely, the synodic month) is very close to 29.5 days. Accordingly, the basic Hebrew calendar year is one of twelve lunar months alternating between 29 and 30 days: Thus, the year normally contains twelve months with a total of 354 days. In such a year, the month of Marcheshvan has 29 days and Kislev has 30 days. However, due to the Rosh Hashanah postponement rules, in some years Kislev may lose a day to have 29 days, or Marcheshvan may acquire an additional day to have 30 days. Normally the 12th month is named Adar. During leap years, the 12th and 13th months are named Adar I and Adar II (Hebrew: Adar Aleph and Adar Bet—"first Adar" and "second adar"). Sources disagree as to which of these months is the "real" Adar, and which is the added leap month. The Bible does not directly mention the addition of leap months (also known as "embolismic" or "intercalary" months). The insertion of the leap month is based on the requirement that Passover occur at the same time of year as the spring barley harvest (aviv). (Since 12 lunar months make up less than a solar year, the date of Passover would gradually move throughout the solar year if leap months were not occasionally added.) According to the rabbinic calculation, this requirement means that Passover (or at least most of Passover) should fall after the March equinox. Similarly, the holidays of Shavuot and Sukkot are presumed by the Torah to fall in specific agricultural seasons. Maimonides, discussing the calendrical rules in his Mishneh Torah (1178), notes: By how much does the solar year exceed the lunar year? By approximately 11 days. Therefore, whenever this excess accumulates to about 30 days, or a little more or less, one month is added and the particular year is made to consist of 13 months, and this is the so-called embolismic (intercalated) year. For the year could not consist of twelve months plus so-and-so many days, since it is said: "throughout the months of the year", which implies that we should count the year by months and not by days. The Hebrew calendar year conventionally begins on Rosh Hashanah, the first day of Tishrei. However, the Jewish calendar also defines several additional new years, used for different purposes. The use of multiple starting dates for a year is comparable to different starting dates for civil "calendar years", "tax or fiscal years", "academic years", and so on. The Mishnah (c. 200 CE) identifies four new-year dates: The 1st of Nisan is the new year for kings and festivals. 
The 1st of Elul is the new year for the cattle tithe, Rabbi Eliezer and Rabbi Shimon say on the first of Tishrei. The 1st of Tishri is the new year for years, of the Shmita and Jubilee years, for planting and for vegetables. The 1st of Shevat is the new year for trees—so the school of Shammai, but the school of Hillel say: On the 15th thereof. Two of these dates are especially prominent: For the dates of the Jewish New Year see Jewish and Israeli holidays 2000–2050. The Jewish year number is generally given by Anno Mundi (from Latin "in the year of the world", often abbreviated AM or A.M.). In this calendar era, the year number equals the number of years that have passed since the creation of the world, according to an interpretation of Biblical accounts of the creation and subsequent history. From the eleventh century, anno mundi dating became the dominant method of counting years throughout most of the world's Jewish communities, replacing earlier systems such as the Seleucid era. As with Anno Domini (A.D. or AD), the words or abbreviation for Anno Mundi (A.M. or AM) for the era should properly precede the date rather than follow it. The reference junction of the Sun and the Moon (Molad 1) is considered to be at 5 hours and 204 halakim, or 11:11:20 p.m., on the evening of Sunday, 6 October 3761 BCE. According to rabbinic reckoning, this moment was not Creation, but about one year "before" Creation, with the new moon of its first month (Tishrei) called molad tohu (the mean new moon of chaos or nothing). It is about one year before the traditional Jewish date of Creation on 25 Elul AM 1,[d] based upon the Seder Olam Rabbah.[e] Thus, adding 3760 before Rosh Hashanah or 3761 after to a Julian calendar year number starting from 1 CE will yield the Hebrew year. For earlier years there may be a discrepancy; see Missing years (Jewish calendar). In Hebrew there are two common ways of writing the year number: with the thousands, called לפרט גדול ("major era"), and without the thousands, called לפרט קטן ("minor era"). Thus, the current year is written as ה'תשפ"ו (5786) using the "major era" and תשפ"ו (786) using the "minor era". Since the Jewish calendar has been fixed, leap months have been added according to the Metonic cycle of 19 years, of which 12 are common (non-leap) years of 12 months, and 7 are leap years of 13 months. This 19-year cycle is known in Hebrew as the Machzor Katan ("small cycle"). Because the Julian years are 365+1⁄4 days long, every 28 years the weekday pattern repeats. This is called the sun cycle, or the Machzor Gadol ("great cycle") in Hebrew. The beginning of this cycle is arbitrary. Its main use is for determining the time of Birkat Hachama. Because every 50 years is a Jubilee year, there is a jubilee (yovel) cycle. Because every seven years is a sabbatical year, there is a seven-year release cycle. The placement of these cycles is debated. Historically, there is enough evidence to fix the sabbatical years in the Second Temple Period. But it may not match with the sabbatical cycle derived from the biblical period; and there is no consensus on whether the Jubilee year is the fiftieth year or the latter half of the forty ninth year. Every 247 years, or 13 cycles of 19 years, form a period known as an iggul, or the Iggul of Rabbi Nahshon. This period is notable in that the precise details of the calendar almost always (but not always) repeat over this period. 
This occurs because the molad interval (the average length of a Hebrew month) is 29.530594 days, which over 247 years results in a total of 90215.965 days. This is almost exactly 90216 days – a whole number and multiple of 7 (equalling the days of the week). So over 247 years, not only does the 19-year leap year cycle repeat itself, but the days of the week (and thus the days of Rosh Hashanah and the year length) typically repeat themselves. Calculations To determine whether a Jewish year is a leap year, one must find its position in the 19-year Metonic cycle. This position is calculated by dividing the Jewish year number by 19 and finding the remainder. (Since there is no year 0, a remainder of 0 indicates that the year is year 19 of the cycle.) For example, the Jewish year 5786 divided by 19 results in a remainder of 10, indicating that it is year 10 of the Metonic cycle. The Jewish year used is the anno mundi year, in which the year of creation according to the Rabbinical Chronology (3761 BCE) is taken as year 1. Years 3, 6, 8, 11, 14, 17, and 19 of the Metonic cycle are leap years. The Hebrew mnemonic GUCHADZaT גוחאדז״ט refers to these years,[f] while another memory aid refers to musical notation.[g] Whether a year is a leap year can also be determined by a simple calculation (which also gives the fraction of a month by which the calendar is behind the seasons, useful for agricultural purposes). To determine whether year n of the calendar is a leap year, find the remainder on dividing [(7 × n) + 1] by 19. If the remainder is 6 or less it is a leap year; if it is 7 or more it is not. For example, the remainder on dividing [(7 × 5786) + 1] by 19 is 14, so the year 5786 is not a leap year. The remainder on dividing [(7 × 5787) + 1] by 19 is 2, so the year 5787 is a leap year. This works because, as there are seven leap years in nineteen years, the difference between the solar and lunar years increases by 7⁄19 month per year. When the difference goes above 18⁄19 month this signifies a leap year, and the difference is reduced by one month. The Hebrew calendar assumes that a month is uniformly of the length of an average synodic month, taken as exactly 29+13753⁄25920 days (about 29.530594 days, which is less than half a second from the modern scientific estimate); it also assumes that a tropical year is exactly 12+7⁄19 times that, i.e., about 365.2468 days. Thus it overestimates the length of the tropical year (365.2422 days) by 0.0046 days (about 7 minutes) per year, or about one day in 216 years. This error is smaller than that of the Julian year (365.2500 days), which drifts by 0.0078 days per year, or one day in 128 years, but much larger than that of the Gregorian year (365.2425 days), which drifts by 0.0003 days per year, or one day in 3333 years. Besides the adding of leap months, the year length is sometimes adjusted by adding one day to the month of Marcheshvan, or removing one day from the month of Kislev. Because each calendar year begins with Rosh Hashanah, adjusting the year length is equivalent to moving the day of the next Rosh Hashanah. Several rules are used to determine when this is performed. 
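Both leap-year tests described above are easy to express in code. The sketch below implements the Metonic-position method and the (7n + 1) mod 19 shortcut, and checks them against the worked examples in the text (5786 is not a leap year; 5787 is). Function names are my own.

```python
LEAP_POSITIONS = {3, 6, 8, 11, 14, 17, 19}

def is_leap_metonic(year: int) -> bool:
    """Position in the 19-year cycle (a remainder of 0 means year 19)."""
    position = year % 19 or 19
    return position in LEAP_POSITIONS

def is_leap_shortcut(year: int) -> bool:
    """Leap year if the remainder of (7*year + 1) divided by 19 is 6 or less."""
    return (7 * year + 1) % 19 < 7

assert not is_leap_metonic(5786) and not is_leap_shortcut(5786)
assert is_leap_metonic(5787) and is_leap_shortcut(5787)
# The two tests agree for every year.
assert all(is_leap_metonic(y) == is_leap_shortcut(y) for y in range(1, 2000))
```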
To calculate the day on which Rosh Hashanah of a given year will fall, the expected molad (moment of lunar conjunction or new moon) of Tishrei in that year is calculated. The molad is calculated by multiplying the number of months that will have elapsed since some (preceding) molad (whose weekday is known) by the mean length of a (synodic) lunar month, which is 29 days, 12 hours, and 793 parts (there are 1080 "parts" in an hour, so that one part is equal to 3+1⁄3 seconds). The very first molad, the molad tohu, fell on Sunday evening at 11:11:20 pm in the local time of Jerusalem,[h] 6 October 3761 BCE (Proleptic Julian calendar) 20:50:23.1 UTC, or in Jewish terms Day 2, 5 hours, and 204 parts. The exact time of a molad, in terms of days after midnight between 29 and 30 December 1899 (the form used by many spreadsheets for date and time), is the serial date and time of the molad tohu plus N × (29 + 12⁄24 + 793⁄25920) days, where N is the number of lunar months elapsed since that first molad. (N = 71440 for the beginning of the 305th Machzor Katan on 1 October 2016.) Adding 0.25 to this converts it to the Jewish system in which the day begins at 6 pm. In calculating the number of months since the known molad that one uses as the starting point, one must include any leap months that fall within the elapsed interval, according to the cycle of leap years. A 19-year cycle of 235 synodic months has 991 weeks 2 days 16 hours 595 parts, a common year of 12 synodic months has 50 weeks 4 days 8 hours 876 parts, while a leap year of 13 synodic months has 54 weeks 5 days 21 hours 589 parts. Four conditions are considered to determine whether the date of Rosh Hashanah must be postponed. These are called the Rosh Hashanah postponement rules, or deḥiyyot. The two most important conditions are that a molad occurring at or after noon pushes Rosh Hashanah to the next day (molad zaken), and that Rosh Hashanah may not fall on a Sunday, Wednesday, or Friday (lo ADU Rosh). The latter rule is applied for religious reasons, so that Yom Kippur does not fall on a Friday or Sunday, and Hoshana Rabbah does not fall on Shabbat.[i] Since Shabbat restrictions also apply to Yom Kippur, if either day falls immediately before the other, it would not be possible to make necessary preparations for the second day (such as candle lighting).[j] Additionally, the laws of Shabbat override those of Hoshana Rabbah, so that if Hoshana Rabbah were to fall on Shabbat, the Hoshana Rabbah aravah ritual could not be performed. Thus Rosh Hashanah can only fall on Monday, Tuesday, Thursday, and Saturday. The kevi'ah uses the letters ה ,ג ,ב and ז (representing 2, 3, 5, and 7, for Monday, Tuesday, Thursday, and Saturday) to denote the starting day of Rosh Hashana and the year. Two other rules are applied much less frequently and serve to prevent impermissible year lengths. Their names are Hebrew acronyms that refer to the ways they are calculated: The rules of postponement of Rosh HaShanah mean that a Jewish common year will have 353, 354, or 355 days while a leap year (with the addition of Adar I which always has 30 days) has 383, 384, or 385 days. Whether a year is deficient, regular, or complete is determined by the time between two adjacent Rosh Hashanah observances and by whether the year is a leap year. A Metonic cycle equates to 235 lunar months in each 19-year cycle. This gives an average of 6,939 days, 16 hours, and 595 parts for each cycle. But due to the Rosh Hashanah postponement rules (preceding section) a cycle of 19 Jewish years can be either 6,939, 6,940, 6,941, or 6,942 days in duration. For any given year in the Metonic cycle, the molad moves forward in the week by 2 days, 16 hours, and 595 parts every 19 years. The greatest common divisor of this and a week is 5 parts, so the Jewish calendar repeats exactly following a number of Metonic cycles equal to the number of parts in a week divided by 5, namely 7×24×216 = 36,288 Metonic cycles, or 689,472 Jewish years. 
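Following the month-counting and molad arithmetic described above, here is a sketch that counts the months elapsed before Tishrei of a given Anno Mundi year (including leap months across the 19-year cycle) and adds that many molad intervals to the molad tohu (Day 2, 5 hours, 204 parts), returning the result as a weekday, hours, and parts triple with Day 1 = Sunday. This is a reader's reconstruction for illustration; the function and variable names are my own.

```python
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR
MOLAD_INTERVAL = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793  # 29d 12h 793p

def months_before(year: int) -> int:
    """Months elapsed from the molad tohu to the molad of Tishrei of `year`,
    counting 235 months per completed 19-year cycle plus the leap months
    already inserted in the current cycle."""
    cycles, y = divmod(year - 1, 19)
    return 235 * cycles + 12 * y + (7 * y + 1) // 19

def molad_tishrei(year: int) -> tuple[int, int, int]:
    """Molad of Tishrei as (day of week 1=Sunday..7=Saturday, hours, parts)."""
    # Molad tohu: Day 2 (Monday), 5 hours and 204 parts after 6 pm.
    total = (2 - 1) * PARTS_PER_DAY + 5 * PARTS_PER_HOUR + 204
    total += months_before(year) * MOLAD_INTERVAL
    days, rest = divmod(total, PARTS_PER_DAY)
    hours, parts = divmod(rest, PARTS_PER_HOUR)
    return days % 7 + 1, hours, parts

print(molad_tishrei(1))      # (2, 5, 204) -- the molad tohu itself
print(molad_tishrei(5786))   # molad of Tishrei AM 5786
```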
There is a near-repetition every 247 years, except for an excess of 50 minutes 16+2⁄3 seconds (905 parts). Contrary to popular impression, one's Hebrew birthday does not necessarily fall on the same Gregorian date every 19 years, since the length of the Metonic cycle varies by several days (as does the length of a 19-year Gregorian period, depending whether it contains 4 or 5 leap years). There are three qualities that distinguish one year from another: whether it is a leap year or a common year; on which of four permissible days of the week the year begins; and whether it is a deficient, regular, or complete year. Mathematically, there are 24 (2×4×3) possible combinations, but only 14 of them are valid. Each of these patterns is known by a kevi'ah (Hebrew: קביעה for 'a setting' or 'an established thing'), which is a code consisting of two numbers and a letter. In English, the code consists of the following: The kevi'ah in Hebrew letters is written right-to-left, so their days of the week are reversed, with the right number for 1 Tishrei and the left for 15 Nisan. The kevi'ah also determines the Torah reading cycle (which parshiyot are read together or separately). The kevi'ah, and thus the annual calendar, of a numbered Hebrew year can be determined by consulting the table of Four Gates, whose inputs are the year's position in the 19-year cycle and its molad Tishrei.[k] In this table, the years of a 19-year cycle are organized into four groups (called "gates"): common years after a leap year but before a common year (1 4 9 12 15); common years between two leap years (7 18); common years after a common year but before a leap year (2 5 10 13 16); and leap years (3 6 8 11 14 17 19). This table numbers the days of the week and hours for the limits of molad Tishrei in the Hebrew manner for calendrical calculations, that is, both begin at 6 pm, thus 7d 18h 0p is noon Saturday, with the week starting on 1d 0h 0p (Saturday 6pm, i.e. the beginning of Sunday reckoned in the Hebrew manner). The oldest surviving table of Four Gates was written by Muhammad ibn Musa al-Khwarizmi in 824. Comparing the days of the week of molad Tishrei with those in the kevi'ah shows that during 39% of years 1 Tishrei is not postponed beyond the day of the week of its molad Tishrei, 47% are postponed one day, and 14% are postponed two days. This table also identifies the seven types of common years and seven types of leap years. Most are represented in any 19-year cycle, except one or two may be in neighboring cycles. The most likely type of year is 5R7 in 18.1% of years, whereas the least likely is 5C1 in 3.3% of years. The day of the week of 15 Nisan is later than that of 1 Tishrei by one, two or three days for common years and three, four or five days for leap years in deficient, regular or complete years, respectively. Given the length of the year, the length of each month is fixed as described above, so the real problem in determining the calendar for a year is determining the number of days in the year. In the modern calendar, this is determined in the following manner.[l] The day of Rosh Hashanah and the length of the year are determined by the time and the day of the week of the Tishrei molad, that is, the moment of the average conjunction. Given the Tishrei molad of a certain year, the length of the year is determined as follows: First, one must determine whether each year is an ordinary or leap year by its position in the 19-year Metonic cycle. Years 3, 6, 8, 11, 14, 17, and 19 are leap years. 
Secondly, one must determine the number of days between the starting Tishrei molad (TM1) and the Tishrei molad of the next year (TM2). For calendar descriptions in general the day begins at 6 pm, but for the purpose of determining Rosh Hashanah, a molad occurring on or after noon is treated as belonging to the next day (the first deḥiyyah).[m] All months are calculated as 29d, 12h, 44m, 3+1⁄3s long (MonLen). Therefore, in an ordinary year TM2 occurs 12 × MonLen days after TM1. This is usually 354 calendar days after TM1, but if TM1 is on or after 3:11:20 am and before noon, it will be 355 days. Similarly, in a leap year, TM2 occurs 13 × MonLen days after TM1. This is usually 384 days after TM1, but if TM1 is on or after noon and before 2:27:16+2⁄3 pm, TM2 will be only 383 days after TM1. In the same way, from TM2 one calculates TM3. Thus the four natural year lengths are 354, 355, 383, and 384 days. However, because of the holiday rules, Rosh Hashanah cannot fall on a Sunday, Wednesday, or Friday, so if TM2 is one of those days, Rosh Hashanah in year 2 is postponed by adding one day to year 1 (the second deḥiyyah). To compensate, one day is subtracted from year 2. It is to allow for these adjustments that the system allows 385-day years (long leap) and 353-day years (short ordinary) besides the four natural year lengths. But how can year 1 be lengthened if it is already a long ordinary year of 355 days or year 2 be shortened if it is a short leap year of 383 days? That is why the third and fourth deḥiyyahs are needed. If year 1 is already a long ordinary year of 355 days, there will be a problem if TM1 is on a Tuesday,[n] as that means TM2 falls on a Sunday and will have to be postponed, creating a 356-day year. In this case, Rosh Hashanah in year 1 is postponed from Tuesday (the third deḥiyyah). As it cannot be postponed to Wednesday, it is postponed to Thursday, and year 1 ends up with 354 days. On the other hand, if year 2 is already a short year of 383 days, there will be a problem if TM2 is on a Wednesday,[o] because Rosh Hashanah in year 2 will have to be postponed from Wednesday to Thursday and this will cause year 2 to be only 382 days long. In this case, year 2 is extended by one day by postponing Rosh Hashanah in year 3 from Monday to Tuesday (the fourth deḥiyyah), and year 2 will have 383 days. For calculated dates of Jewish holidays, see Jewish and Israeli holidays 2000–2050. 
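The four postponement rules just described, together with the molad arithmetic above, are enough to fix the day of Rosh Hashanah and hence the length of a year. The sketch below follows a widely used formulation of the fixed calendar, counting days from an epoch so that day mod 7 identifies the weekday (0 = Sunday); the thresholds 19440, 9924 and 16789 parts encode noon, 9h 204p and 15h 589p respectively. It is a reconstruction under those assumptions, not code taken from the article.

```python
def is_leap(year: int) -> bool:
    return (7 * year + 1) % 19 < 7

def elapsed_days(year: int) -> int:
    """Days from the calendar's epoch to Rosh Hashanah of `year`,
    after applying the four postponement rules (dehiyyot)."""
    cycles, y = divmod(year - 1, 19)
    months = 235 * cycles + 12 * y + (7 * y + 1) // 19
    parts = 204 + 793 * (months % 1080)
    hours = 5 + 12 * months + 793 * (months // 1080) + parts // 1080
    day = 1 + 29 * months + hours // 24
    parts = 1080 * (hours % 24) + parts % 1080
    if (parts >= 19440                                                  # molad at or after noon
            or (day % 7 == 2 and parts >= 9924 and not is_leap(year))     # Tuesday, 9h 204p, common year
            or (day % 7 == 1 and parts >= 16789 and is_leap(year - 1))):  # Monday, 15h 589p, after a leap year
        day += 1
    if day % 7 in (0, 3, 5):   # Rosh Hashanah may not fall on Sunday, Wednesday or Friday
        day += 1
    return day

def year_length(year: int) -> int:
    return elapsed_days(year + 1) - elapsed_days(year)

# Year lengths can only be 353-355 (common) or 383-385 (leap).
assert all(year_length(y) in (353, 354, 355, 383, 384, 385)
           for y in range(5700, 5800))
print(year_length(5786))   # expected 354, matching the dates given earlier
                           # (sunset 22 September 2025 to sunset 11 September 2026)
```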
Accuracy A "new moon" (astronomically called a lunar conjunction and, in Hebrew, a molad) is the moment at which the sun and moon have the same ecliptic longitude (i.e. they are aligned horizontally with respect to a north–south line). The period between two new moons is a synodic month. The actual length of a synodic month varies from about 29 days 6 hours and 30 minutes (29.27 days) to about 29 days and 20 hours (29.83 days), a variation range of about 13 hours and 30 minutes. Accordingly, for convenience, the Hebrew calendar uses a long-term average month length, known as the molad interval, which equals the mean synodic month of ancient times. The molad interval is 29 days, 12 hours, and 793 "parts" (1 "part" = 1⁄18 minute = 3+1⁄3 seconds) (i.e., 29.530594 days), and is the same value determined by the Babylonians in their System B about 300 BCE and was adopted by Hipparchus (2nd century BCE) and by Ptolemy in the Almagest (2nd century CE). Its remarkable accuracy (less than one second from the current true value) is thought to have been achieved using records of lunar eclipses from the 8th to 5th centuries BCE. In the Talmudic era, when the mean synodic month was slightly shorter than at present, the molad interval was even more accurate, being "essentially a perfect fit" for the mean synodic month at the time. Currently, the accumulated drift in the moladot since the Talmudic era has reached a total of approximately 97 minutes. This means that the molad of Tishrei lands one day later than it ought to in (97 minutes) ÷ (1440 minutes per day) = nearly 7% of years. Therefore, the seemingly small drift of the moladot is already significant enough to affect the date of Rosh Hashanah, which then cascades to many other dates in the calendar year, and sometimes (due to the Rosh Hashanah postponement rules) also interacts with the dates of the prior or next year. The rate of calendar drift is increasing with time, since the mean synodic month is progressively shortening due to gravitational tidal effects. Measured on a strictly uniform time scale (such as that provided by an atomic clock) the mean synodic month is becoming gradually longer, but since the tides slow Earth's rotation rate even more, the mean synodic month is becoming gradually shorter in terms of mean solar time. A larger source of error is the inaccuracy of the Metonic cycle. Nineteen Jewish years average 6939d 16h 33m 3+1⁄3s, compared to the 6939d 14h 26m 15s of nineteen mean solar years. Thus, the Hebrew calendar drifts by just over 2 hours every 19 years, or approximately one day every 216 years. Due to accumulation of this discrepancy, the earliest date on which Passover can fall has drifted by roughly eight days since the 4th century, and the 15th of Nisan now falls only on or after 26 March (the date in 2013), five days after the actual equinox on 21 March. In the distant future, this drift is projected to move Passover much further in the year. If the calendar is not amended, then Passover will start to land on or after the summer solstice around approximately AM 16652 (12892 CE).[p] When the calendar was fixed in the 4th century, the earliest Passover (in year 16 of the Metonic cycle) began on the first full moon after the March equinox.[q] This is still the case in about 80% of years; but, in about 20% of years, Passover is a month late by this criterion.[r] Presently, this occurs after the "premature" insertion of a leap month in years 8, 11, and 19 of each 19-year cycle, which causes Passover to fall especially far after the March equinox in such years. Calendar drift also impacts the observance of Sukkot, which will shift into Israel's winter rainy season, making dwelling in the sukkah less practical. It also affects the logic of the Shemini Atzeret prayer for rain, which will be more often recited once rains are already underway. Modern scholars have debated at which point the drift could become ritually problematic, and proposed adjustments to the fixed calendar to keep Passover in its proper season. The seriousness of the calendar drift is discounted by many, on the grounds that Passover will remain in the spring season for many millennia, and the Torah is generally not interpreted as having specified tight calendrical limits. 
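The headline drift figures quoted above follow directly from the cycle lengths, as this small check shows (the tropical-year value of 365.2422 days is the one used in the text; the rounding in the last digits is mine).

```python
MOLAD_DAYS = 29 + 12 / 24 + 793 / 25920        # molad interval, about 29.530594 days

nineteen_jewish_years = 235 * MOLAD_DAYS        # 19 calendar years = 235 lunar months
nineteen_solar_years = 19 * 365.2422

excess_days = nineteen_jewish_years - nineteen_solar_years
print(round(excess_days * 24, 2))               # ~2.11 hours of drift per 19-year cycle
print(round(19 / excess_days))                  # ~216 years per full day of drift
```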
However, some writers and researchers have proposed "corrected" calendars (with modifications to the leap year cycle, molad interval, or both) which would compensate for these issues. Religious questions abound about how such a system might be implemented and administered throughout the diverse aspects of the world Jewish community. Usage While imprisoned in Auschwitz, Jews made every effort to preserve Jewish tradition in the camps, despite the monumental dangers in doing so. The Hebrew calendar, a tradition of great importance to Jewish practice and ritual, was particularly dangerous to keep, since no tools for telling time, such as watches and calendars, were permitted in the camps. The keeping of a Hebrew calendar was a rarity amongst prisoners and there are only two known surviving calendars that were made in Auschwitz, both of which were made by women. Before this, the tradition of making a Hebrew calendar had generally been assumed to be the job of a man in Jewish society. Early Zionist pioneers were impressed by the fact that the calendar preserved by Jews over many centuries in far-flung diasporas, as a matter of religious ritual, was geared to the climate of their original country: major Jewish holidays such as Sukkot, Passover, and Shavuot correspond to major points of the country's agricultural year such as planting and harvest. Accordingly, in the early 20th century the Hebrew calendar was re-interpreted as an agricultural rather than religious calendar. After the creation of the State of Israel, the Hebrew calendar became one of the official calendars of Israel, along with the Gregorian calendar. Holidays and commemorations not derived from previous Jewish tradition were to be fixed according to the Hebrew calendar date. For example, the Israeli Independence Day falls on 5 Iyar, Jerusalem Reunification Day on 28 Iyar, Yom HaAliyah on 10 Nisan, and the Holocaust Commemoration Day on 27 Nisan. The Hebrew calendar is still widely acknowledged, appearing in public venues such as banks (where it is legal for use on cheques and other documents), and on the mastheads of newspapers. The Jewish New Year (Rosh Hashanah) is a two-day public holiday in Israel. However, since the 1980s an increasing number of secular Israelis celebrate the Gregorian New Year (usually known as "Silvester Night"—ליל סילבסטר) on the night between 31 December and 1 January. Prominent rabbis have on several occasions sharply denounced this practice, but with no noticeable effect on the secularist celebrants. Wall calendars commonly used in Israel are hybrids. Most are organised according to Gregorian rather than Jewish months, but begin in September, when the Jewish New Year usually falls, and provide the Jewish date in small characters. History Lunisolar calendars similar to the Hebrew calendar, consisting of twelve lunar months plus an occasional 13th intercalary month to synchronize with the solar/agricultural cycle, were used in all ancient Middle Eastern civilizations except Egypt, and likely date to the 3rd millennium BCE. While there is no mention of the intercalary month anywhere in the Hebrew Bible, most Biblical scholars hold that intercalation was almost certainly a regularly occurring part of early Hebrew calendar keeping. The earliest potentially Hebrew calendar found is the Gezer calendar. It is debated among Semiticists whether its language is Hebrew (perhaps "early Samarian Hebrew") or Phoenician. The Gezer calendar is commonly dated to circa the 10th century BCE. 
Biblical references to the pre-exilic calendar include ten of the twelve months identified by number rather than by name. Prior to the Babylonian captivity, the names of only four months are referred to in the Tanakh: Aviv (first month), Ziv (second month), Ethanim (seventh month), and Bul (eighth month). All of these are believed to be Canaanite names. The last three of these names are only mentioned in connection with the building of the First Temple and Håkan Ulfgard suggests that the use of what are rarely used Canaanite (or in the case of Ethanim perhaps Northwest Semitic) names indicates that "the author is consciously utilizing an archaizing terminology, thus giving the impression of an ancient story...". Alternatively, these names may be attributed to the presence of Phoenician scribes in Solomon's court at the time of the building of the Temple. During the Babylonian captivity, the Jewish people adopted the Babylonian names for the months. The Babylonian calendar descended directly from the Sumerian calendar. These Babylonian month-names (such as Nisan, Iyyar, Tammuz, Ab, Elul, Tishri and Adar) are shared with the modern Levantine solar calendar (currently used in the Arabic-speaking countries of the Fertile Crescent) and the modern Assyrian calendar, indicating a common origin. The origin is thought to be the Babylonian calendar. According to some Christian and Karaite sources, the tradition in ancient Israel was that 1 Nisan would not start until the barley was ripe, this being the test for the onset of spring.[s] If the barley was not ripe, an intercalary month would be added before Nisan. In the 1st century, Josephus stated that "Moses...appointed Nisan...as the first month for the festivals...the commencement of the year for everything relating to divine worship, but for selling and buying and other ordinary affairs he preserved the ancient order [i.e. the year beginning with Tishrei]." Edwin Thiele concluded that the ancient northern Kingdom of Israel counted years using the ecclesiastical new year starting on 1 Aviv/Nisan (Nisan-years), while the southern Kingdom of Judah counted years using the civil new year starting on 1 Tishrei (Tishri-years). The practice of the Kingdom of Israel was also that of Babylon, as well as other countries of the region. The practice of Judah is continued in modern Judaism and is celebrated as Rosh Hashana. Before the adoption of the current Anno Mundi year numbering system, other systems were used. In early times, the years were counted from some significant event such as the Exodus. During the period of the monarchy, it was the widespread practice in western Asia to use era year numbers according to the accession year of the monarch of the country involved. This practice was followed by the united kingdom of Israel, kingdom of Judah, kingdom of Israel, Persia, and others. In addition, the author of Kings coordinated dates in the two kingdoms by giving the accession year of a monarch in terms of the year of the monarch of the other kingdom, though some commentators note that these dates do not always synchronise. Other era dating systems have been used at other times. For example, Jewish communities in the Babylonian diaspora counted the years from the first deportation from Israel, that of Jehoiachin in 597 BCE. The era year was then called "year of the captivity of Jehoiachin". During the Hellenistic Maccabean period, Seleucid era counting was used, at least in the Land of Israel (under Greek influence at the time). 
The Books of the Maccabees used Seleucid era dating exclusively, as did Josephus writing in the Roman period. From the 1st to 10th centuries, the center of world Judaism was in the Middle East (primarily Iraq and Palestine),[t] and Jews in these regions also used Seleucid era dating, which they called the "Era of Contracts [or Documents]"; this counting is still sometimes used by Yemenite Jews. The Talmud states: Rav Aha bar Jacob then put this question: How do we know that our Era [of Documents] is connected with the Kingdom of Greece at all? Why not say that it is reckoned from the Exodus from Egypt, omitting the first thousand years and giving the years of the next thousand? In that case, the document is really post-dated! Said Rav Nahman: In the Diaspora the Greek Era alone is used. He [Rav Aha] thought that Rav Nahman wanted to dispose of him anyhow, but when he went and studied it thoroughly he found that it is indeed taught [in a Baraita]: In the Diaspora the Greek Era alone is used. In the 8th and 9th centuries, as the center of Jewish life moved from Babylonia to Europe, counting using the Seleucid era "became meaningless", and thus was replaced by the anno mundi system. The use of the Seleucid era continued till the 16th century in the East, and was employed even in the 19th century among Yemenite Jews. Occasionally in Talmudic writings, reference was made to other starting points for eras, such as destruction era dating, being the number of years since the 70 CE destruction of the Second Temple. According to Rabbinic Judaism, Exodus 12:1–2 requires that the months be determined by a proper court with the necessary authority to sanctify the months. Hence the court, not the astronomy, has the final decision. When the observational form of the calendar was in use, whether a leap month was added depended on three factors: 'aviv [i.e., the ripeness of barley], fruits of trees, and the equinox. On two of these grounds it should be intercalated, but not on one of them alone. It may be noted that in the Bible the name of the first month, Aviv, literally means "spring". Thus, if Adar was over and spring had not yet arrived, an additional month was observed. The Tanakh contains several commandments related to the keeping of the calendar and the lunar cycle, and records changes that have taken place to the Hebrew calendar. Numbers 10:10 stresses the importance in Israelite religious observance of the new month (Hebrew: ראש חודש, Rosh Chodesh, "beginning of the month"): "... in your new moons, ye shall blow with the trumpets over your burnt-offerings..." Similarly in Numbers 28:11. "The beginning of the month" meant the appearance of a new moon, and in Exodus 12:2, "This month is to you". According to the Mishnah and Tosefta, in the Maccabean, Herodian, and Mishnaic periods, new months were determined by the sighting of a new crescent, with two eyewitnesses required to testify to the Sanhedrin to having seen the new lunar crescent at sunset. The practice in the time of Gamaliel II (c. 100 CE) was for witnesses to select the appearance of the moon from a collection of drawings that depicted the crescent in a variety of orientations, only a few of which could be valid in any given month. These observations were compared against calculations. At first the beginning of each Jewish month was signaled to the communities of Israel and beyond by fires lit on mountaintops, but after the Samaritans began to light false fires, messengers were sent. 
The inability of the messengers to reach communities outside Israel before mid-month High Holy Days (Succot and Passover) led outlying communities to celebrate scriptural festivals for two days rather than one, observing the second feast-day of the Jewish diaspora because of uncertainty of whether the previous month ended after 29 or 30 days. It has been noted that the procedures described in the Mishnah and Tosefta are all plausible procedures for regulating an empirical lunar calendar. Fire-signals, for example, or smoke-signals, are known from the pre-exilic Lachish ostraca. Furthermore, the Mishnah contains laws that reflect the uncertainties of an empirical calendar. Mishnah Sanhedrin, for example, holds that when one witness holds that an event took place on a certain day of the month, and another that the same event took place on the following day, their testimony can be held to agree, since the length of the preceding month was uncertain. Another Mishnah takes it for granted that it cannot be known in advance whether a year's lease is for twelve or thirteen months. Hence it is a reasonable conclusion that the Mishnaic calendar was actually used in the Mishnaic period. The accuracy of the Mishnah's claim that the Mishnaic calendar was also used in the late Second Temple period is less certain. One scholar has noted that there are no laws from Second Temple period sources that indicate any doubts about the length of a month or of a year. This led him to propose that the priests must have had some form of computed calendar or calendrical rules that allowed them to know in advance whether a month would have 30 or 29 days, and whether a year would have 12 or 13 months. Between 70 and 1178 CE, the observation-based calendar was gradually replaced by a mathematically calculated one. The Talmuds indicate at least the beginnings of a transition from a purely empirical to a computed calendar. Samuel of Nehardea (c. 165–254) stated that he could determine the dates of the holidays by calculation rather than observation. According to a statement attributed to Yose (late 3rd century), Purim could not fall on a Sabbath nor a Monday, lest Yom Kippur fall on a Friday or a Sunday. This indicates that, by the time of the redaction of the Jerusalem Talmud (c. 400 CE), there were a fixed number of days in all months from Adar to Elul, also implying that the extra month was already a second Adar added before the regular Adar. Elsewhere, Shimon ben Pazi is reported to have counseled "those who make the computations" not to set Rosh Hashana or Hoshana Rabbah on Shabbat. This indicates that there was a group who "made computations" and controlled, to some extent, the day of the week on which Rosh Hashana would fall. There is a tradition, first mentioned by Hai Gaon (died 1038 CE), that Hillel II was responsible for the new calculated calendar with a fixed intercalation cycle "in the year 670 of the Seleucid era" (i.e., 358–359 CE). Later writers, such as Nachmanides, explained Hai Gaon's words to mean that the entire computed calendar was due to Hillel II in response to persecution of Jews. Maimonides (12th century) stated that the Mishnaic calendar was used "until the days of Abaye and Rava" (c. 320–350 CE), and that the change came when "the land of Israel was destroyed, and no permanent court was left." 
Taken together, these two traditions suggest that Hillel II (whom they identify with the mid-4th-century Jewish patriarch Ioulos, attested in a letter of the Emperor Julian, and the Jewish patriarch Ellel, mentioned by Epiphanius) instituted the computed Hebrew calendar because of persecution. H. Graetz linked the introduction of the computed calendar to a sharp repression following a failed Jewish insurrection that occurred during the rule of the Christian emperor Constantius and Gallus. Saul Lieberman argued instead that the introduction of the fixed calendar was due to measures taken by Christian Roman authorities to prevent the Jewish patriarch from sending calendrical messengers. Both the tradition that Hillel II instituted the complete computed calendar, and the theory that the computed calendar was introduced due to repression or persecution, have been questioned. Furthermore, two Jewish dates during post-Talmudic times (specifically in 506 and 776) are impossible under the rules of the modern calendar, indicating that some of its arithmetic rules were established in Babylonia during the times of the Geonim (7th to 8th centuries). Most likely, the procedure established in 359 involved a fixed molad interval slightly different from the current one,[u] Rosh Hashana postponement rules similar but not identical to current rules,[v] and leap months were added based on when Passover preceded a fixed cutoff date rather than through a repeated 19-year cycle. The Rosh Hashana rules apparently reached their modern form between 629 and 648, the modern molad interval was likely fixed in 776, while the fixed 19-year cycle also likely dates to the late 8th century. Except for the epoch year number (the fixed reference point at the beginning of year 1, which at that time was one year later than the epoch of the modern calendar), the calendar rules reached their current form by the beginning of the 9th century, as described by the Persian Muslim astronomer Muhammad ibn Musa al-Khwarizmi in 823. Al-Khwarizmi's study of the Jewish calendar describes the 19-year intercalation cycle, the rules for determining on what day of the week the first day of the month Tishrei shall fall, the interval between the Jewish era (creation of Adam) and the Seleucid era, and the rules for determining the mean longitude of the sun and the moon using the Jewish calendar. Not all the rules were in place by 835. In 921, Aaron ben Meïr had a debate with Saadya Gaon about one of the rules of the calendar. This indicates that the rules of the modern calendar were not so clear and set. In 1000, the Muslim chronologist al-Biruni described all of the modern rules of the Hebrew calendar, except that he specified three different epochs used by various Jewish communities being one, two, or three years later than the modern epoch. In 1178, Maimonides included all the rules for the calculated calendar and their scriptural basis, including the modern epochal year, in his work Mishneh Torah. He wrote that he had chosen the epoch from which calculations of all dates should be as "the third day of Nisan in this present year ... which is the year 4938 of the creation of the world" (22 March 1178). Today, these rules are generally used by Jewish communities throughout the world. Other calendars Outside of Rabbinic Judaism, evidence shows a diversity of practice. Karaites use the lunar month and the solar year, but the Karaite calendar differs from the current Rabbinic calendar in a number of ways. 
The Karaite calendar is identical to the Rabbinic calendar used before the Sanhedrin changed the Rabbinic calendar from the lunar, observation-based calendar to the current, mathematically based calendar used in Rabbinic Judaism today. In the lunar Karaite calendar, the beginning of each month, the Rosh Chodesh, can be calculated, but is confirmed by the observation in Israel of the first sightings of the new moon. This may result in an occasional variation of at most one day, when the new moon cannot be observed. The day is usually "picked up" in the next month. The addition of the leap month (Adar II) is determined by observing in Israel the ripening of barley at a specific stage (called aviv, as defined by Karaite tradition), rather than using the calculated and fixed calendar of rabbinic Judaism. Occasionally this results in Karaites being one month ahead of other Jews using the calculated rabbinic calendar. The "lost" month would be "picked up" in the next cycle when Karaites would observe a leap month while other Jews would not. Furthermore, the seasonal drift of the rabbinic calendar is avoided, resulting in the years affected by the drift starting one month earlier in the Karaite calendar. Also, the four rules of postponement of the rabbinic calendar are not applied, since they are not mentioned in the Tanakh. This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. In the Middle Ages many Karaite Jews outside Israel followed the calculated rabbinic calendar, because it was not possible to retrieve accurate aviv barley data from the land of Israel. However, since the establishment of the State of Israel, and especially since the Six-Day War, the Karaite Jews that have made aliyah can now again use the observational calendar. The Samaritan community's calendar also relies on lunar months and solar years. Calculation of the Samaritan calendar has historically been a secret reserved to the priestly family alone, and was based on observations of the new crescent moon. More recently, a 20th-century Samaritan High Priest transferred the calculation to a computer algorithm. The current High Priest confirms the results twice a year, and then distributes calendars to the community. The epoch of the Samaritan calendar is the year of the entry of the Children of Israel into the Land of Israel with Joshua. The month of Passover is the first month in the Samaritan calendar, but the year number increments in the sixth month. As in the Rabbinic calendar, there are seven leap years within each 19-year cycle. However, the Rabbinic and Samaritan calendars' cycles are not synchronized, so Samaritan festivals—notionally the same as the Rabbinic festivals of Torah origin—are frequently one month off from the date according to the Rabbinic calendar. Additionally, as in the Karaite calendar, the Samaritan calendar does not apply the four rules of postponement, since they are not mentioned in the Tanakh. This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. Many of the Dead Sea Scrolls have references to a unique calendar, used by the people there, who are often assumed to have been Essenes. The year of this calendar used the ideal Mesopotamian calendar of twelve 30-day months, to which were added 4 days at the equinoxes and solstices (cardinal points), making a total of 364 days. 
With only 364 days, the calendar would be very noticeably different from the actual seasons after a few years, but there is nothing to indicate what was done about this problem. Various scholars have suggested that nothing was done and the calendar was allowed to change with respect to the seasons, or that changes were made irregularly when the seasonal anomaly was too great to be ignored any longer. Calendrical evidence for the postexilic Persian period is found in papyri from the Jewish colony at Elephantine, in Egypt. These documents show that the Jewish community of Elephantine used the Egyptian and Babylonian calendars. The Sardica paschal table shows that the Jewish community of some eastern city, possibly Antioch, used a calendrical scheme that kept Nisan 14 within the limits of the Julian month of March. Some of the dates in the document are clearly corrupt, but they can be emended to make the sixteen years in the table consistent with a regular intercalation scheme. Peter, the bishop of Alexandria (early 4th century CE), mentions that the Jews of his city "hold their Passover according to the course of the moon in the month of Phamenoth, or according to the intercalary month every third year in the month of Pharmuthi", suggesting a fairly consistent intercalation scheme that kept Nisan 14 approximately between Phamenoth 10 (6 March in the 4th century CE) and Pharmuthi 10 (5 April). Jewish funerary inscriptions from Zoar (south of the Dead Sea), dated from the 3rd to the 5th century, indicate that when years were intercalated, the intercalary month was at least sometimes a repeated month of Adar. The inscriptions, however, reveal no clear pattern of regular intercalations, nor do they indicate any consistent rule for determining the start of the lunar month. See also Notes References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Facebook_Credits] | [TOKENS: 636] |
Contents Facebook Credits Facebook Credits was a virtual currency that enabled people to purchase items in games and non-gaming applications on the Facebook Platform. One U.S. dollar was the equivalent of 10 Facebook Credits. Facebook Credits were available in 15 currencies including U.S. dollars, pound sterling, euros, and Danish kroner. Facebook had hoped eventually to expand Credits into a micropayment system open to any Facebook application, whether a game or a media company application, but ultimately deprecated Credits in favour of users' local currencies. Facebook Credits went into its alpha stage in May 2009 and progressed into the beta stage in February 2010, which ended in January 2011. At that time, Facebook announced that all Facebook game developers would be required to process payments only through Facebook Credits from July 1, 2011. Facebook retained 30% and developers received 70% of all revenue earned through Credits. Credits was a single currency that could be used in multiple games and applications, and its introduction led former PayPal executives to comment on whether Credits could soon replace PayPal as the leader in virtual payments. By the end of 2010, it was expected that Facebook users would purchase Credits to pay for the majority of virtual goods sold on the social network. In March 2011, Facebook created an official subsidiary to handle payments: Facebook Payments Inc. In June 2012, Facebook announced it would no longer use its own money system, Facebook Credits; users with remaining credits would see them converted into their local currencies. Facebook Credits was officially removed from Facebook in September 2013. Using Credits Over 150 developers used Facebook Credits in more than 650 Facebook games and applications, which represented over 70% of virtual goods purchased on Facebook. Developers who offered Facebook Credits included Zynga (FarmVille, FrontierVille), CrowdStar (Happy Aquarium, HelloCity), and PopCap Games (Bejeweled Blitz), as well as Playdom, Playfish, RockYou, and 6waves. In September 2010, it was announced that Facebook Credits would become the exclusive payment method for all games developed by Zynga and hosted on Facebook. Zynga was the number one Facebook application developer and was expected to earn $500 million in 2010 from virtual goods. It was announced in April 2011 that Facebook users would be able to use Credits to purchase vouchers that could be redeemed for real goods and services by using the "Deals" offering. Obtaining Credits In addition to purchasing Credits within Facebook, there were a number of online and offline channels for earning or buying Facebook Credits. US Regulation of Facebook Credits and other Virtual Currencies In March 2013, FinCEN announced new guidance relating to the regulation of virtual currencies such as Facebook Credits and bitcoin. These regulations affect those who deal in virtual currencies and were seen as FinCEN's first step towards regulating virtual currency (as opposed to fiat money). As regulation of such currencies expands, there is a possibility that individual U.S. citizens may be required to report substantial holdings of these currencies on their tax returns. See also References External links |
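The fixed exchange rate (10 Credits per U.S. dollar) and the 30/70 revenue split described above lend themselves to a quick worked example. The sketch below is a hypothetical illustration of that arithmetic, not code from any Facebook API, and the function names are invented for this example.

# Hypothetical illustration of the Credits arithmetic described above:
# $1 bought 10 Credits, and Facebook kept 30% of Credits revenue.
CREDITS_PER_USD = 10
FACEBOOK_SHARE = 0.30

def credits_for_dollars(usd: float) -> int:
    """Credits a user received for a given dollar purchase."""
    return int(usd * CREDITS_PER_USD)

def developer_payout(credits_spent: int) -> float:
    """Developer's dollar payout after Facebook's 30% cut."""
    gross_usd = credits_spent / CREDITS_PER_USD
    return gross_usd * (1 - FACEBOOK_SHARE)

print(credits_for_dollars(5))   # 50 Credits for a $5 purchase
print(developer_payout(50))     # $3.50 to the developer, $1.50 to Facebook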
======================================== |
[SOURCE: https://www.ynet.co.il/activism/article/hy625sguze#autoplay] | [TOKENS: 301] |
Every square is a world unto itself: "We are one society with many colors." As part of "February Out of the Ordinary" month, marked by roughly a thousand events at 200 community centers across the country, the "Flag of Equality" was raised in Shefa-Amr, a joint creation by the women of the community, who knitted hundreds of pieces into a single tapestry representing the entire city. "We managed to do it, not me alone, all the women together," said Hala Amouri |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-NASA_Jet_Propulsion_Laboratory_(JPL)_2003_80-0] | [TOKENS: 11899] |
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, with Phobos being a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but three primary periods are generally recognized: the Noachian, the Hesperian, and the Amazonian. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. 
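The figures quoted above (about 11% of Earth's mass, 15% of Earth's volume, and 38% of Earth's surface gravity) are mutually consistent, since surface gravity scales as mass divided by radius squared. The short check below uses only those ratios and is an illustrative sketch, not a precise calculation.

# Consistency check: surface gravity scales as mass / radius^2.
# Uses only the rounded ratios quoted in the text (illustrative values).
mass_ratio = 0.11      # Mars mass as a fraction of Earth's
volume_ratio = 0.15    # Mars volume as a fraction of Earth's

radius_ratio = volume_ratio ** (1 / 3)          # ~0.53 of Earth's radius
gravity_ratio = mass_ratio / radius_ratio ** 2  # ~0.39 of Earth's gravity

print(f"radius ratio  ~{radius_ratio:.2f}")
print(f"gravity ratio ~{gravity_ratio:.2f}")    # close to the quoted 38%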
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity begins to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core with a radius of 613 ± 67 kilometres (381 ± 42 mi). Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day or 22 millirads per day experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
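The definition of Martian zero elevation quoted earlier in this passage can be checked directly: 610.5 Pa against Earth's standard sea-level pressure of 101,325 Pa gives the stated fraction. The snippet below is only an illustrative arithmetic check.

# Illustrative check of the areoid (zero-elevation) definition quoted above.
MARS_REFERENCE_PRESSURE = 610.5      # Pa, Martian zero-elevation datum
EARTH_SEA_LEVEL_PRESSURE = 101325.0  # Pa, standard atmosphere

fraction = MARS_REFERENCE_PRESSURE / EARTH_SEA_LEVEL_PRESSURE
print(f"{fraction:.4f} atm (~{fraction * 100:.1f}% of Earth's sea-level pressure)")
# ~0.0060 atm, i.e. about 0.6%, matching the figure in the text.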
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
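The 3 μm dust layer mentioned above implies a surprisingly small total volume of dust. The sketch below estimates it from the stated ±58° latitude band; the Martian mean radius used is an assumed standard figure (about 3,389.5 km), and the result is only an order-of-magnitude illustration.

import math

# Order-of-magnitude estimate of the dust volume implied by a uniform
# 3-micrometre layer between 58 degrees north and south.
R_MARS_KM = 3389.5                   # assumed standard mean radius
LAYER_THICKNESS_KM = 3e-9            # 3 micrometres expressed in kilometres

# Area of the spherical zone between latitudes +58 and -58 degrees.
band_area_km2 = 4 * math.pi * R_MARS_KM**2 * math.sin(math.radians(58))

volume_km3 = band_area_km2 * LAYER_THICKNESS_KM
print(f"band area ~{band_area_km2:.3e} km^2, dust volume ~{volume_km3:.2f} km^3")
# Only a fraction of a cubic kilometre of dust, despite a planet-wide storm.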
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to those on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also produce deposits of dry ice that cover the polar ice caps. Hydrology While Mars contains significant amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. 
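The roughly 10.8 km scale height quoted above follows from the standard isothermal approximation H = RT/(Mg). The sketch below plugs in the 38%-of-Earth gravity figure from the text together with an assumed mean temperature of about 210 K and a CO2-dominated mean molar mass of about 0.0435 kg/mol; those last two values are illustrative assumptions, not figures from the article.

# Isothermal scale height H = R*T / (M*g), a standard approximation.
# Gravity follows from the ~38%-of-Earth figure in the text; the
# temperature and mean molar mass are assumed representative values.
R_GAS = 8.314           # J / (mol K), universal gas constant
T_MEAN = 210.0          # K, assumed mean atmospheric temperature
M_MEAN = 0.0435         # kg/mol, assumed CO2-dominated mean molar mass
g_mars = 9.81 * 0.38    # m/s^2, ~38% of Earth's surface gravity

H = R_GAS * T_MEAN / (M_MEAN * g_mars)
print(f"scale height ~{H / 1000:.1f} km")   # roughly 10-11 km, as quoted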
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice, as well as snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. The gullies show no partial degradation by weathering, and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions would have required the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
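The claim above that the south polar ice cap could cover the planet to a depth of 11 metres implies a particular total ice volume. The sketch below back-calculates it from Mars's surface area; the mean radius used is an assumed standard value, and the result is only a rough illustration.

import math

# Rough back-calculation of the ice volume implied by an 11 m global layer.
R_MARS_M = 3.3895e6                      # m, assumed standard mean radius
surface_area_m2 = 4 * math.pi * R_MARS_M**2

ice_volume_m3 = surface_area_m2 * 11.0   # an 11-metre-deep global layer
print(f"implied ice volume ~{ice_volume_m3 / 1e9:.2e} km^3")
# On the order of 1.6 million cubic kilometres of ice.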
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. 
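The deuterium enrichment quoted above can be checked with a single division. The snippet below uses the two D/H values given in the text and is only an arithmetic illustration.

# Arithmetic check of the deuterium enrichment quoted above.
d_h_mars = 9.3e-4    # central value of (9.3 +/- 1.7) x 10^-4
d_h_earth = 1.56e-4

print(f"enrichment factor ~{d_h_mars / d_h_earth:.1f}")
# ~6.0, consistent with the quoted "five to seven times" once the
# +/- 1.7 uncertainty on the Martian value is taken into account.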
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth (opposition) once every synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, in which Earth and Mars are on opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. 
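The roughly 780-day synodic period quoted above follows from the standard relation 1/S = 1/T_Earth - 1/T_Mars applied to the two orbital periods. The sketch below uses Earth's sidereal year (about 365.256 days, an assumed standard value) together with the 687-day Martian year from the text.

# Synodic period from the standard relation 1/S = 1/T_inner - 1/T_outer.
# Earth's sidereal year is an assumed standard value; Mars's orbital
# period (~687 days) is taken from the text.
T_EARTH = 365.256   # days
T_MARS = 686.98     # days

S = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"synodic period ~{S:.1f} days")   # ~780 days, i.e. about 2.1 years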
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
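The statement above that Phobos rises in the west and rises again in just 11 hours can be reproduced from its orbital radius (9,376 km, given earlier) and the 24.6-hour Martian day. The sketch below does so with Kepler's third law, deriving Mars's GM from assumed standard values for its surface gravity and radius; it is an illustrative calculation, not a figure from the article.

import math

# Apparent (synodic) period of Phobos as seen from the Martian surface.
# GM is derived from assumed standard values g ~3.71 m/s^2, R ~3389.5 km;
# the 9,376 km orbital radius and 24.6 h solar day come from the text.
g_mars, R_mars = 3.71, 3.3895e6          # m/s^2, m (assumed standard values)
GM = g_mars * R_mars**2                  # ~4.26e13 m^3/s^2

a_phobos = 9.376e6                       # m, orbital radius from the text
T_orbit_h = 2 * math.pi * math.sqrt(a_phobos**3 / GM) / 3600   # ~7.7 h
T_sol_h = 24.6                           # Martian solar day, from the text

# Phobos orbits faster than Mars rotates, so it rises in the west; its
# apparent period is the synodic combination of the two motions.
T_apparent_h = 1 / (1 / T_orbit_h - 1 / T_sol_h)
print(f"orbital period ~{T_orbit_h:.1f} h, rises again every ~{T_apparent_h:.1f} h")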
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis); more commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609 Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610 Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. The diurnal parallax of Mars was again measured, this time telescopically, in an effort to determine the Sun-Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. 
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flew past without making contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997 Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions a range of new improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. 
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars; such glass could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although the finding is highly intriguing, no definitive determination of a biological or abiotic origin for this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China announced that it was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ethiopian_Jews] | [TOKENS: 22993] |
Contents Beta Israel The Beta Israel,[b] or Ethiopian Jews,[c] are a Jewish group originating in the Amhara and Tigray regions of northern Ethiopia, where they were historically spread out across more than 500 small villages. The majority were concentrated in what is today North Gondar Zone, Shire Inda Selassie, Wolqayit, Tselemti, Dembia, Segelt, Quara, and Belesa. The community suffered extensive destruction in the 19th century due to the Mahdist War, as well as persecution by the Ethiopian Empire. In response to these pressures, and influenced by the efforts of Protestant and Messianic Jewish missionaries, a portion of the community converted to Christianity. These converts became known as the Falash Mura. The larger Christian Beta Abraham community is considered a crypto-Judaic branch of the Beta Israel community. A large wave of Aliyah from Ethiopia starting in the 1980s brought most Beta Israel to Israel, and several Israeli government initiatives have facilitated their emigration. The Beta Israel community was classed[clarification needed] under the 1950 Law of Return by Yitzhak Rabin[clarification needed] following halakhic and legal debates over their status. Thus, the Israeli government, with support from the United States, began a series of secret airlift operations to bring the Beta Israel to Israel, including Operation Moses (1984), Operation Joshua (1985), and Operation Solomon (1991). The majority of Beta Israel now live in Israel. By the end of 2023, approximately 171,600 Jews of Ethiopian descent were living in Israel, including around 93,600 born in Ethiopia and 78,000 born in Israel to Ethiopian-born parents. Terminology The Beta Israel community (literally 'house of Israel' in Ge'ez) has been known by several names throughout its history. The local populace referred to them as Esraelawi (Israelites), Yehudi or Ayhud. The name Ayhud (lit. "Jewish people"), however, is rarely used in the community, as Ethiopian Christians had used it as a derogatory term. The term has increased in usage in the 20th century as the Beta Israel strengthened its ties with other Jewish communities. Within the community, the term Ibrawi (lit. "Hebrew") was used to refer to the chawa (lit. "free man," see Chewa regiments) in the community, in contrast to the barya (lit. "slave"). The colloquial Ethiopian/Eritrean term Falasha or Felasha, which means "landless", "wanderers", or "exiles", was given to the community in the 15th century by the Emperor Yeshaq I after they were conquered by the Ethiopian Empire; its use is now considered offensive, though initially it may not have been considered derogatory. The term appears in Jewish texts in the 15th century (Hebrew: פֶלַאשַה). Other, lesser-used terms by which the community has been known include Tebiban ("possessor of secret knowledge"). Geography Prior to the large-scale immigration of the community to Israel, they were primarily located in the northwestern regions of Ethiopia, historically known as Abyssinia. The majority of the community lived in areas such as the Semien Mountains, Begemder, Tsegede, Wolqayt, Tselemti, Dembiya, and Segelt. Smaller groups also resided in the Shewa region, including Addis Ababa, and in parts of Eritrea, particularly Asmara. The settlements were situated along the tributaries of the Tekezé River and near Lake Tana, the source of the Blue Nile. This area included the Simien Mountains, which include Ras Dashen, the highest peak in Ethiopia, reaching an elevation of 4,549 meters.
While the high-altitude terrain above 3,000 meters provided strategic advantages for defense and served as natural fortifications, it was difficult for farming due to thin, rocky soil and steep slopes. The mountain air in these highlands, known locally as Dega, is regarded as conducive to good health, longevity, and physical vitality. The communal settlements were typically situated near streams, rivers, and mountainous areas, often at elevations of 2,000 to 3,000 meters above sea level, within a climate zone called Waina Dega (meaning "Vineland" in Amharic). This zone has a temperate climate with average temperatures ranging between 2.5 °C and 18 °C. The region receives significant rainfall, averaging around 1,550 millimeters annually, with two distinct rainy seasons. The area also benefits from fertile soil, making it suitable for agriculture, although the steep, rocky terrain at higher elevations, like those found in the Simien Mountains, can present challenges for farming. Beta Israel communities typically avoided lowland areas (below 2,000 meters), known as Kolla, due to the risks posed by tropical diseases such as malaria, by parasites and dangerous wildlife, and by potentially hostile neighboring tribes. However, during waves of immigration to Israel, members of the community were forced to cross hazardous lowland regions; this journey entailed significant hardship, and thousands perished, particularly during prolonged periods of waiting in refugee camps located in the Sudanese desert and in Addis Ababa. History The earliest archaeological evidence indicating the presence of Jews in the region comes from two incense burners discovered at the Adi Qereh site near Yeha, the ancient capital of Dʿmt, inscribed in Sabaic and dated to approximately 600 BCE. These inscriptions refer to "Hebrews" living among the local population, showing an early Jewish presence in the area before the destruction of the First Temple in Jerusalem. According to the researchers, these findings are consistent with archaeological evidence for the penetration of crops domesticated in the Near East, indicating that 2,500 years ago people from the Near East migrated to the Ethiopian Highlands, bringing foreign crops such as wheat, barley and lentils. The early Jewish historian Josephus wrote that a migration of 100,000 Jews to Kush occurred during the reign of Ptolemy I Soter. Support for the large number of Jews who came to the region can be found in the words of Philo, who says that at the time there were 1,000,000 Jews living in Egypt, Libya and Cush. The Italian scholar Filosseno Luzzatto cites this in arguing that the custom of offering sacrifices among Ethiopian Jews was influenced by the Land of Onias. In 325, Ezana of Axum proclaimed Christianity as the state religion, leading to the construction of monasteries and churches throughout the empire. An inscription from this period references an insurrection led by a rebel named Pinchas, whose insurgents revolted and retreated to the Simien Mountains. It also details the surrender and the raising of taxes from Simien to Aksum. The Babylonian Talmud mentions that during the Amoraim period a man named Yehuda from Aksum in Hanuda (then Abyssinia) came to Jerusalem to visit the Temple. The collapse of the Himyar Kingdom occurred following a pogrom led by Dhu Nuwas, which targeted the Aksumites and local Christian communities in Najran. In response, Kaleb of Axum launched a military campaign with the support of the Byzantine Empire under Justin I.
This intervention led to the death of Dhu Nuwas and the subsequent Aksumite annexation of Himyar. According to the account of the Greek traveler Cosmas Indicopleustes from Alexandria, a significant portion of Jews were either forcibly converted to Christianity or exiled to the barren mountain known as Semiani, which refers to the Simien Mountains. Aksumite inscriptions from the 6th century also record another outbreak of tensions between Jewish and Christian communities in Aksum. In the 9th century, a traveler known as Eldad ha-Dani, often referred to as "The Danite," arrived in the city of Kairouan. Eldad claimed to come from an independent Jewish kingdom located in the mountains, south of Kush (modern-day Ethiopia). Eldad brought halachic books with him; today scholars find that they almost completely resemble the halacha of the Beta Israel. Eldad asserted that his community did not adhere to the Talmud, but instead preserved a form of Judaism adhering only to the Written Torah. Upon his arrival, Eldad's claims were met with skepticism by the local Jewish community in Kairouan. In response, the Jewish leaders sent a letter to Tzemach Gaon, the Gaon of Sura and a prominent halachic authority of the time, seeking clarification regarding Eldad's identity and religious practices. Tzemach Gaon affirmed that Eldad was indeed Jewish and said he belonged to the Tribe of Dan, which had settled in the south of Cush 135 years before the destruction of the First Temple. Eldad's works were among the first to bring knowledge of the customs of the Beta Israel to the diaspora, and his testimony was accepted and cited by many prominent Geonim. His writings also gave rise to the legend of Prester John. In the mid-10th century a revolt broke out, led by Gudit, which spread chaos across the kingdom. She was a daughter of Gideon IV, the King of the Simien. After her father was killed in battle with the Aksumite Empire, Gudit inherited his throne. Eventually, she defeated the Aksumites. Though her origins are murky, Ethiopian and Arab sources depict her as a Jewish zealot. She is credited with "laying waste" to Aksum and with the burning of the Debre Damo monastery; after the sacking, her name was changed from Esther to Gudit. The Kebra Nagast uses the burning and destruction of Axum, regarded as a holy city, as a pretext for persecuting the Beta Israel. She is also credited with assassinations and with the fall of the Aksumite dynasty. During this time, the Arab writer Ibn Hawqal writes: "The land of Habash has been ruled by a woman for many years, and she killed the king of Abyssinia, and ruled by a people called the Al-Dani [Danites]. To this day, she rules an independent kingdom, and her kingdom borders Abyssinia to the south." Additionally, Gudit established trade relations with the Arab caliphates, a departure from the policies of previous Aksumite rulers, who had refrained from doing so in the past. As part of these diplomatic efforts, she sent a zebra to the King of Yemen, who in turn presented it as a gift to the Abbasid Caliph Al-Qadir, who mentions that he received the gift from a Jewess who had overthrown the King of Abyssinia. Following her death, Gudit was eventually supplanted by the Zagwe dynasty. There are numerous accounts and traditions concerning the arrival of Jews in Ethiopia.
According to the main account of origin, the community descends from an ancient migration of the Tribe of Dan: when the war between the brothers occurred, Jeroboam demanded that the Tribe of Dan fight against the Tribe of Judah; they refused and migrated to Ethiopia via Egypt. According to Tzemach Gaon, the tribe of Dan migrated to Ethiopia 135 years before the destruction of Solomon's Temple. Axumite legends suggest that the Beta Israel are descended from emissaries sent by King Solomon to the Queen of Sheba, rather than from contemporaneous figures. However, the Beta Israel themselves reject this national history. Menachem Waldeman, a prominent researcher of the Beta Israel, identifies four origin stories among them, all of which trace their ancestry to the Tribe of Dan, which migrated through Egypt. The Hebrew Bible makes several references to the presence of Jews in the Land of Kush [modern-day Ethiopia and Sudan], and includes prophetic passages regarding their settlement in the area after the dispersion of the Israelites, and their redemption in the regathering. These references appear notably in the prophetic texts of the Book of Isaiah and Book of Zephaniah. For example, in the Book of Isaiah, chapter 11, verse 11: And it shall come to pass in that day, that the Lord shall set his hand again the second time to recover the remnant of his people, which shall be left, from Assyria, and from Egypt, and from Pathros, and from Cush, and from Elam, and from Shinar, and from Hamath, and from the islands of the sea." In 722 BC, the Assyrian king Sargon II conquered the city of Samaria, the capital of the Kingdom of Israel (Samaria). Most of the kingdom's inhabitants were exiled to far-flung provinces in the Assyrian empire. Isaiah also prophesies of Jews who lived across the rivers of Kush, meaning across the Blue Nile (Isaiah 18:1–7): O earth, the sound of wings, which passeth by, unto the River Kush [modern-day Ethiopia]. He sendeth messengers in the sea, and in vessels of gold upon the face of the waters, swift angels to a nation broken and dismayed, to a people terrible of him and of his people, a nation broken and dismayed, whose land the rivers despised. [...]. They will be left together for the vultures of the mountains, and for the beasts of the earth; And the end of the ravenous beast shall be upon him, and all the beasts of the earth shall be smitten with him. In that day shall I bring a people to the Lord of hosts, a people broken and dismayed, and a people terrible, from whom shall I bring them; a nation broken and dismayed, whose land the rivers despised, unto the place of the name of the Lord of hosts, Mount Zion. The prophet Zephaniah prophesied that after the destruction of the First Temple, sacrifices would continue to be offered by Israelites across the River Kush (modern-day Ethiopia), in the Book of Zephaniah, chapter 3, verse 10: "From across the River Cush, my servants, daughters of my dispersed ones, will bring me my offering." It seems that during the time of Isaiah and Zephaniah in the 8th and 7th centuries BCE, a Jewish community existed in these areas. The Jewish presence in the region of Abyssinia is notably documented by the 12th-century traveller and historian Benjamin of Tudela, who, writing circa 1150, observed the Jews in the area and makes clear mention of the community: "There are many Jews here. They are not subject to the Gentiles.
They have cities and forts on the mountaintops, from where they raid the Christian Ethiopian Empire [...] they take spoils and make refuge in their mountains; no one prevails against them." Additionally, the Venetian explorer Marco Polo also references the existence of a large Jewish community in the region of "Abash," which coincides with modern-day Abyssinia [Habash]. In 1270, Yekuno Amlak overthrew the Zagwe dynasty and claimed descent from the Aksumite nobility that had survived the massacre orchestrated by the 10th-century ruler, Gudit. He established the Solomonic dynasty, which would govern Ethiopia for the next eight centuries. On August 10, 1270, the Emperor stated: "I will destroy the Jews," a pronouncement aimed at consolidating support from the Christian population who had been embroiled in constant warfare for decades. At the time, many viewed the Beta Israel community as "squatters" or "immigrants." At the same time, an Ethiopian Christian monk, Zena Marqos, nephew of King Yekuno Amlak (1270–1285), wrote an account of the religion and history of the Beta Israel. His informant, a Jew converted to Christianity, says that they arrived during the reign of King Solomon. In 1329, while Emperor Amda Seyon I was engaged in military campaigns against the Muslims of the Sultanate of Ifat, intelligence reports from his spies began to reach him. These reports indicated that the Jews had started to burn churches and conduct sneak attacks on imperial soldiers who were occupying their homeland and trying to convert the population, specifically in the regions of Semien, Wagara, Tselemti, and Szeged. In response to this growing threat, the Emperor called upon Tsaga Christos, the governor of Begemder, to quell the unrest. Tsaga Christos was tasked with leading his own forces, bolstered by reinforcements from neighboring provinces, to neutralize the Jewish insurgents. The Emperor's primary objective was to prevent the Jewish factions from invading the Christian kingdom's territories and potentially overthrowing the Christian rule, as had occurred in earlier times. Simultaneously, the Emperor's forces continued their engagement with the Muslim forces in the Dera. The rebellion was ultimately subdued, and historical sources report that "The Jews hid, but when the Christian forces arrived, they leaped into the abyss." Following the successful suppression of the uprising, the commander of the imperial army sent an emissary to the Emperor to inform him that Tsaga Christos had triumphed over the Jewish fighters, defeated them, and seized considerable plunder. The remaining Jewish forces retreated into the Simien Mountains. Under Emperor Yeshaq I's reign, tensions between the Jewish community and Ethiopia escalated. The emperor sought to extend his influence across the Empire and proposed that the leaders of Beta Israel become key members of the Ethiopian Empire. However, this offer, along with a subsequent tax increase, was met with resistance from the Jewish population, who rallied under their leader, Agor, to oppose the emperor's forces. The ensuing conflict took place in the Wagra region, where the emperor's superior military strength ultimately drove the Jewish army southward to the Simien Mountains. Following his conquest of a fertile area, the emperor proclaimed his victory. The Emperor later encouraged missionary activity in the area, establishing churches and baptizing into Christianity many Jews who lived in the area under occupation.
The resistance of the local Jews led the Emperor to decree: "If he is baptized into the Christian religion, he will inherit his ancestral land, and he will not be Falasha". The term Falasha is derived from the root F-L-S/S in Geʽez, which translates to "invaders." In the context of the Beta Israel community, however, it encompasses additional meanings such as "foreigners," "squatters," and "immigrants." This broader interpretation reflects the identity and historical experiences of the community, and the term is considered derogatory. In 1445, Emperor Zara Yaqob married Seyon Moges, daughter of Gideon V. The couple had three sons. In 1462, she was accused of espionage on behalf of the Jewish community during a rebellion launched while the Emperor was engaged in a campaign against the Adal Sultanate. The revolt was quickly suppressed, his wife was executed, and many Jewish leaders were imprisoned. Following the victory, Zara Yaqob adopted the title "Exterminator of the Jews." Toward the end of Zara Yaqob's reign, the legitimacy of his new heir, Baeda Maryam I, was questioned due to his mother's Jewish background and rumors of her continued adherence to Judaism. After her death, Baeda Maryam secretly arranged her burial at Makdas Maryam Church in Debre Birhan, allegedly bribing clergy to conceal the act. Upon discovering this, the Emperor ordered his son beaten, but later reconciled with him following pressure from the Ethiopian Orthodox Church. Baeda Maryam was crowned emperor eight years after his mother's death and Zaga Amlak's escape. Accusations regarding his possible Jewish sympathies may have contributed to his decision to wage a prolonged military campaign, lasting seven years, against the Jewish population of Begemder province, led by General Markos. Markos was a general under Emperor Baeda Maryam I. He was sent to quell the Jews, and during his campaign each Jewish settlement was made answerable to a local governor who reported directly to him. Markos's actions included the demolition of synagogues and the establishment of churches in their place, the baptism of local Jews under threat, and the murder of people who refused to convert to Christianity. The fighting against the Jews continued uninterrupted because the emperor signed a peace treaty in 1465 with Sultan Badlay ibn Sa'ad ad-Din, which gave Markos extensive freedom of action, as well as soldiers who were transferred from the borders to fight the Jews. Markos carried out a brutal inquisition, and when the leaders of the Jewish communities arrived he gave the order to behead them in revenge for the crucifixion of Jesus. The emperor's records state: And Markos fought against them and encamped for seven years at the foot of their hill, and then he defeated them with great effort and cunning, and delivered them into his hand, and conquered all their cities. And after that a counsel came to his heart, saying: How shall I forgive these accursed ones... It is better to destroy them than to let them live... He ordered the herald to proclaim: Every Falasha shall come to whom I have commanded, and whoever does not come, his house shall be destroyed and his property shall be forfeited. And after that all the Falashas gathered around Judge Marcus... Then he ordered his men to cut off their necks with the sword, and they consumed them, until their blood was shed, and their corpses filled the field.
And the word of mockery which their fathers spoke on the day of the crucifixion of our Lord was fulfilled upon them, saying: Their blood be on us, and on our children. In this period, most of the religious relics were either burnt or looted, including numerous sacred books. In addition, the Jews were unable to practice their religious traditions for 80 years and were exiled into foreign lands, and their land was transferred to the church and nobility. Any form of practicing Judaism was illegal at this time and punishable by death. The Jews were made to live as tenants under governors who kept them under supervision; owing to this period of heavy persecution, some of their religious laws and customs were lost forever, and some Christian elements were incorporated. In order to avoid assimilation they made a religious law, "Atancon", which required them to avoid contact with non-Jews; those who made contact with non-Jews had to immerse themselves before re-entering the community. During this time, Rabbi Eliyahu of Ferrara, a highly respected Torah scholar from Italy, made Aliyah to the Land of Israel, passing through Egypt along the way. After a tortuous journey, he reached Jerusalem on May 1, 1437. He made pilgrimage and verified the rumours that had reached Italy about the Tribe of Dan overcoming their enemies. In his letter he describes the Jews in Abyssinia: I think I have already told you what a young Jewish man told me about the residents of his area, who are their own masters and not subservient to others. They are surrounded by a great nation called Hubash... They have the Torah and an oral commentary, but they do not have the Talmud or our poskim. I studied several of their mitzvot, and found that some follow our opinion while others follow the opinion of the Karaites. They have the Book of Esther but not Hanukkah. They are a distance of three months away from us, and the River Gozen runs through their land." During this time many Ethiopian Jews appeared in Egypt and the Mediterranean, and were redeemed by the local Jewish communities and integrated, joining synagogues and communities. Between 1485 and 1488, Rabbi Obadiah ben Abraham of Bertinoro met two Jewish prisoners from Ethiopia who had been sold into slavery, redeemed by the Jews of Egypt, and integrated into their synagogues. The rabbi describes the Beta Israel in a letter as "mighty children of Israel"; he writes that the Jewish kingdom lies within the borders of the Ethiopian Empire and that the Christians managed to subdue the Jews after centuries of bloody battles: Indeed, what I have learned and know without a doubt is that on one of the borders of the kingdom of the Prester John (Abyssinia)... there the Israelites are certainly encamped, and they have five presidents or kings, and it is said that for hundreds of years they fought with the Prester John, great and mighty wars, and finally, in the summer, the hand of the Prester John prevailed over them and struck them with a great and mighty blow, and entered their lands and destroyed and exiled them, and almost all memory of Israel was lost from those places. And those who remained in them he decreed strange decrees to violate their religion, similar to the decrees that the Greek kingdom decreed in the days of the Hasmoneans... and their enemies returned to them a few souls of men and women and a great number of people from distant nations, and brought some of them to Egypt and the Jews redeemed them in Egypt.
And I saw two of them in Egypt and they were dark, but not black like the Kushites, and I could not distinguish between them whether they kept the law of the Karaites or the law of the rabbis, because in some things they seemed to keep the law of the Karaites, in that they said that there would be no fire in their houses on the Sabbath, and in other things they appear to be observing the teachings of the rabbis, and they say that they are referring to the Tribe of Dan... This is what I saw with my own eyes and heard with my own ears, even though the two men did not know the Holy language except a tiny bit, and their Arabic language is barely understood by the people of the land." In another letter, written after receiving letters from the Jews of Yemen about the difficult situation of Ethiopian Jews following the Markos riots of 1465–1472, the rabbi writes: "... And we fear lest, God forbid, the rumor be true, for the voice is growing very strong. May God protect His children and His servants, Amen." Rabbi David ben Solomon ibn Abi Zimra (RaDBaZ), a prominent 16th-century Sephardic rabbinic authority and the Chief Rabbi of Egyptian Jewry, addressed the status of the Ethiopian Jews in his responsa, owing to doubts over whether they were Karaites or Rabbanites. He ruled that the Jews of Ethiopia are not Karaites and emphatically ruled that they are undoubtedly Jewish and are descendants of the Tribe of Dan, who had settled in Abyssinia. This position arose within the context of halachic discussions, leading to the call for the redemption of captives. Later, Rabbi Yaakov de Castro, also known as the Maharikash for his commentary on the Shulchan Aruch, followed his teacher's ruling. The Radbaz's halakhic ruling was later invoked in the lead-up to Operation Moses, when Rabbi Ovadia Yosef brought it to the Israeli government and requested that action be hastened to bring the community to Israel. The Kabbalist Rabbi Abraham ben Eliezer Halevi the Elder, one of the exiles from Spain, immigrated to the Land of Israel and lived in Jerusalem, and wrote about the Jewish kingdom in Ethiopia. Halevi, who often dealt with calculations of redemption and the coming of the Messiah, thought that redemption was near and that the war of Ethiopian Jews against the Christian empire was the precursor of its arrival. In his epistle he refers to the Kingdom of Simien, with its "Gideons", and writes that in the land of Habash there was "the harsh kingdom of the mighty Jews". During this time, the Portuguese Kabbalist Solomon Molcho wrote an epistle regarding the Ethiopian community, believing they would be the ones to bring about the coming of the Messiah. During this period, numerous epistles were composed by renowned authors of the time, such as Moses ben Mordecai Bassola, Isaac Akrish, and Abraham Farissol, among others. Additionally, various writings emerged concerning the Beta Israel and their interactions with the Jewish communities in the Mediterranean. Many Ethiopian Jews at this time were redeemed by the local communities, joining synagogues, intermarrying, and so on. The legends of the conflicts involving the Beta Israel and the Ethiopian Empire gained fame and spread across the world.
During the mid-16th century, the Beta Israel were heavily persecuted by the ruling elites: the Emperor took for himself the agriculturally richer part of their territory and led small-scale raids into their villages and forts, which devastated the community, while forced conversions and the burning of manuscripts and scrolls gradually increased during this time. The Emperor also looted and burnt their possessions; an increase of Christian influence can be seen in this period. During this time the Adal Sultanate launched a holy war against the Ethiopian Empire. This major conflict caused the decline of the Ethiopian Empire, which the Beta Israel saw as divine retribution for the persecution they had suffered under its yoke for centuries; they were also drawn into the conflict. As the Adal Sultanate consolidated its hold across the Ethiopian Empire and reached the Beta Israel heartland, the leaders of the community offered their loyalty to the Sultan; an account of the exchange is recorded by an eyewitness, Shihab al-Dīn Aḥmad ibn ʿAbd al-Qādir ibn Sālim ibn ʿUthmān, who writes: Those Jews that dwelled in the land of Habash (once) controlled the district of Semien. They are called Falasa, because they chant the praise of the One God and have faith in none other. They have no prophet and no saint. The people of Bahr Amba have subjugated them for the past forty years and employed them to work the land for them. They tilled the fields for them. After Ahmad ibn Ibrahim al-Ghazi had won the victory over the patrician Sa'ul, all the Falasa came from deep valleys and even from mountain caves – because they did not dwell in the lowlands, but in the mountains and in caves. They said to the imam "For forty years there has been hatred between us and the people of Bahr Amba. Let us kill them now, those who are left. And let us occupy their strongholds now that you have conquered them. We will be sufficient to do this to them. So, remain in your camp, and what we will do to them will astonish you." According to the writer, the Jews were successful in their campaign and in return the Sultan granted them exemption from the Jizya tax and allowed them to retake their former lands and worship freely. After the death of Dawit II, the Beta Israel switched their allegiance and began supporting Emperor Gelawdewos against the Muslim invaders. The Christian Ethiopians did succeed eventually in defeating the Muslims and preventing Ahmad ibn Ibrahim from conquering Ethiopia. However, after the war ended, the Ottoman Empire invaded; in a last-minute switch, the Beta Israel abandoned their alliance with the Emperor and began supporting the Ottomans. The geographer Abraham Farissol details the help given by the Beta Israel to the Ottomans during the war. During the Ottoman offensive, the Mamluk-Ottoman general Özdemir Pasha corresponded with the Ottoman sultan on the war; in a letter to Sultan Suleiman II, Özdemir states that the Ottoman force survived because a Jewish prince sent 12,000 horsemen to their aid: "If it wasn't for the Jewish prince who sent those 12,000 Jewish horsemen, I would have risked losing all my men." After the death of Gelawdewos, Emperor Sarsa Dengel succeeded him and proved to be more oppressive to the Jews than his predecessors.
The Emperor launched a devastating campaign against the Beta Israel with the use of cannons that he had recently captured from the Ottomans. While on this campaign, Sarsa Dengel learned that the Borana Oromo had invaded the provinces of Shewa, Wej, and Damot. Despite this, Sarsa Dengel refused to defend these territories against the Oromos and instead continued to focus his attention on the Beta Israel. This decision generated considerable frustration among his officials and also had devastating effects for the future, but the Emperor justified his action by stating: "It is better for me that I fight with the enemies of the blood of Jesus Christ [i.e. Jews] than go to fight against the Galla." The Ethiopian forces continued to pacify the Jews, culminating in the capture and execution of the Jewish leader, Gershom. Following his death, many of the Beta Israel committed mass suicide to avoid forced conversions. In 1614, the Jews of Semien rebelled against the Emperor Susenyos I. By 1624, the revolt had been quelled, and the conclusive defeat of a subsequent uprising the following year marked the end of the political and military autonomy of the Beta Israel. After the Beta Israel's autonomy in Ethiopia ended in the 1620s, Emperor Susenyos I confiscated their lands and forcibly baptized them, ordering his men to "erase the memory of the Jews"; this included mass killings and the looting and burning of their belongings and holy books. In addition, the practice of Judaism was forbidden in Ethiopia. As a result of this period of oppression, much traditional Jewish culture and practice was lost or changed, including the loss of Hebrew. Following the success of the Portuguese Jesuit order in converting the Emperor Susenyos, the Portuguese Inquisition was brought to Ethiopia, in which thousands of Jews perished. The Portuguese diplomat and traveller Manuel de Almeida writes about them extensively in his History of Upper Abassia. The destruction of the institutional foundations of the Jews in the north of Ethiopia had without doubt resulted in the destruction of their books and archives, thus erasing the memory of their history and their origins. During this time a rabbi named Shlomo from Vienna appeared in Ethiopia, bringing with him many books with the objective of helping the community with religious issues. While he was with the community teaching, the Jesuits instigated a religious debate before the Emperor over the interpretation of the biblical prophecies allegedly foretelling the birth of Jesus. One of the debates was held in the village of Dankaz, where the emperor Susenyos I lived, and the rabbi won the argument in front of a large crowd; many were drawn to his words and his influence on the community was great. In view of the rabbi's influence, the Jesuits ordered his books to be burnt and urged Susenyos to execute him; however, his fate is unknown. After almost 30 years of the Emperor's reign, a Jew named Abraham, identified as Shlomo by Wolf Leslau, who headed the Beta Israel under Susenyos, pleaded with the Emperor to let his people worship freely. Abraham's request appears in an anonymous chronicle: An Israelite named Abraham appeared at the Emperor's court and the Emperor liked him. He said to him: "What do you want me to get you?" He replied, "If it please my lord, I beg you to let my people return to their faith."
The emperor accepted his request and replied, "Everything will be as you have requested," and the emperor declared, "I give into your hand all the Israelites in my land. All those who survived assimilation shall be at peace, and the remains of the exiles returned to their seats and worshiped the religion of their ancestors". Although his request was heeded, and many returned to their ancestral lands, the Portuguese still insisted on forced conversions, which continued during this time not only for the Beta Israel but for the whole country, owing to the Jesuits' influence over the Emperor. For example, Manuel de Almeida describes the situation of the Beta Israel at the time: There were Jews in Ethiopia from the first. Some of them were converted to the law of Christ Our Lord; others persisted in their blindness and formerly possessed many wide territories, almost the whole Kingdom of Dambea and the provinces of Ogara and Semien. This was when the [Christian] empire was much larger, but since the [pagan and Muslim] Gallas have been pressing in upon them [from the east and south], the Emperors have pressed in upon them [i.e., the Jews to the west?] much more and took Dembea and Ogara from them by force of arms many years ago. In Semien, however, they defended themselves with great determination, helped by the position and the ruggedness of their mountains. Many rebels ran away and joined them till the present Emperor Setan Sequed [throne name of Susneyos], who in his 9th year fought and conquered the King Gideon and in his 19th year attacked Semien and killed Gideon. ... The majority and the flower of them were killed in various attacks and the remainder surrendered or dispersed in different directions. Many of them received holy baptism, but nearly all were still as much Jews as they had been before. There are many of the latter in Dambea and in various regions; they live by weaving cloth and by making zargunchos (spears), ploughs and other iron articles, for they are great smiths. Between the Emperor's kingdoms and the Cafres [Negroes] who live next to the Nile outside imperial territory, mingled together with each other are many more of these Jews who are called Falashas here. The Falashas or Jews are ... of [Arabic] race [and speak] Hebrew, though it is very corrupt. They have their Hebrew Bibles and sing the psalms in their synagogues. The situation of the Beta Israel improved after the accession of Fasilides, under whom the Jesuits were expelled and tensions eased for the community, though not completely; the Emperor settled a group of Jews near his palace, where they served as craftsmen, silversmiths, artisans and in other positions. The situation remained calm for the community until the Zemene Mesafint, when a period of decentralization erupted and increased the community's isolation for a century. According to the early 19th-century missionary Samuel Gobat, the Christians considered the Jews "Bouda" (a term used in Ethiopia to denote Ethiopian Jews as beasts), or sorcerers, and they often fell victim to marauding warlords; as Gobat reported, "Their cattle are often taken from them. They carry no arms, either for attack or defense." During this period, many members of the community were killed indiscriminately by the warlords, and the Jewish religion was effectively lost for some years, the community also being unable to perform the ritual red heifer sacrifice.
The Scottish traveller James Bruce noted during this time that almost all their scribes had been killed, and witnessed that many bought Christian Bibles and reconstructed their scriptures from them. Effectively during this time Hebrew was almost completely gone and was replaced with Ge'ez. In the 1840s the preeminent Kahen of Qwara, Abba Widdayeh, restored the religion. In the mid-19th century the Zemene Mesafint ended, and peace and stability were brought back to the region. By the end of the Zemene Mesafint, a monk named Abba Mahri (Mahri ben Suthal) interpreted the dream of a poor Christian woman who turned to him for help, predicting that she would give birth to a strong son who would bear the name "Kassa" and who went on to become Emperor Tewodros II, restorer of the Ethiopian Empire. During this time, many of the Emperor's advisers tried to convince him to exterminate the Beta Israel once and for all, but the Emperor, owing to his great affection for the community, refused outright. During this time, Abba Mahri proclaimed that the Gathering of Israel was approaching and that it was time for the community to stop merely yearning to return and to immigrate to the Holy Land on foot, for which he was given special permission by the Emperor (some 60 years before Herzl's Zionist plan). He was joined by many who considered him a Tzadik. Several Christian missionaries wrote about him, including Johann Martin Flad: Once, I was introduced to a monk, Father Mahri, who was completely convinced that the time was near when God would gather the Jews of all nations into the land of their fathers. He believed that they would then build their temple in Jerusalem, as described in Ezekiel's prophecy, and worship God according to the Law of Moses. Historians see Abba Mahri as the sole or main initiator of the aliyah attempt of 1862, and he took practical steps to organize it: he sent emissaries to all the communities of Beta Israel, gathered the names of those who would make the journey and led their departure. He intended to lead those who set out to the Red Sea, to cross it and from there to complete the road leading to Jerusalem. At some point, Abba Mahri recognized the failure of the attempt, realized that the prophecy was for the future, and returned to Dembiya. Many of the survivors made their way back with Abba Mahri; a few, with his permission, tried to continue their journey to Jerusalem. Some traditions hold that some of the emigrants came with Mahri from the mountains to the Red Sea, and that when they reached the water Abba Mahri raised his staff, as Moses did in the Exodus from Egypt, in the hope that the stormy waters would open for the crossing; when the sea did not open, many jumped into the water, which claimed many lives. After this they recognized the failure of the journey and made their way back to Ethiopia, settling in newly formed communities in Tigray and Eritrea. The recognition of the failure of this catastrophic attempt led to increased pressure from the newly established missionaries who tried to convert them. After the death of Tekle Giyorgis II, another prince, Yohannes IV, took the throne and proved to be more zealous and harsher towards the Beta Israel than his predecessors: he lifted the ban on foreign missionaries in Ethiopia only for those who directed their mission towards the Beta Israel, because, like most of his predecessors, he believed the Beta Israel were foreigners and therefore did not mind if they were targeted by missionaries.
This is borne out by his preference for using local priests, rather than foreign missionaries, to convert the Muslims of Wollo. During this time, the Emperor placed the Jews under Tekle Haymanot of Gojjam, to whom they had to pay tax for their protection, either in Maria Theresa thalers or in salt. During this time, missionaries had a strong presence in the community, especially through Messianic Jews, many of whom spread falsehoods and offered commodities to pressure members of the community into converting, taking advantage of the community's poverty, in addition to new threats made by Yohannes IV against those who did not convert. Many of those who converted are known today as the Falash Mura. News of this reached many rabbis and Jewish leaders in Europe, many of whom sympathized with the community and sent aid and relief. Following a call by Rabbi Azriel Hildesheimer regarding Ethiopian Jews, the Hakham Bashi of the Ottoman Empire, Rabbi Yakir Gueron, issued a letter in 1866 entitled "Call to Help the Israelites in the Land of Kush", in which he stressed that there is "no difference between Ethiopian Jews and the rest of the Jews", warned of the danger of the missionaries who threatened the community, and called for action to save it. Leading rabbis in Europe made a public call to use all possible means to save the Ethiopian Jews. The Karaites of Saint Petersburg also heard about this appeal and sent a letter of support to the community, titled "Letter to the Falashas", asking it questions. Following the lack of effort by the Alliance to help, many rabbis took action.[citation needed] Rabbi Azriel Hildesheimer was one of the first to take practical action, during his tenure as rabbi of the Austro-Hungarian community of Eisenstadt. In his letter to Rabbi Solomon Judah Loeb Rapoport of Prague, dated Shushan Purim 5624/1864, he states that "the desire to do good for my people burns within me. My heart says that I must take this holy burden upon my shoulders, formulate my intentions, implement this plan, reflect, and make a beginning, and that I must lead this great event." Rapoport replies that he supports this plan and calls it a mitzvah mission: "Whoever saves one Jewish soul, it is as if he saved an entire world." Six months later, on 11 Tishrei 5625/1865, Hildesheimer made a "public appeal to all our fellow Jews," which was published in the major Jewish publications. Rabbi Yaakov Dovid Wilovsky of Safed ruled that the Beta Israel are "Jews in the full sense of the word", supported the mission, made an appeal in Constantinople, and gathered funds and donations to aid the Ethiopian Jews and help protect them from the pressures of their persecutors and the missionaries. During this time many rabbis wrote in support of the mission, including Zvi Hirsch Kalischer: "As often as I speak of him, I do earnestly remember" (Jeremiah 31:19) that he has done very well on the issue of the Falashas, to arouse the spirit of our fellow Jews, to teach the wayward knowledge of how they should worship the Lord, our God, according to the holy Written Torah and the Oral Torah, and to preserve them from straying into the ways of the inciters, the missionaries, God forbid. This is certainly a great mitzvah. We must act on behalf of these souls, to grant knowledge to the ignorant among the people in distant lands, to bring them under the shadow of the Divine, and to prevent the hunters from snaring their human prey."
Further in his letter, Rabbi Kalischer makes practical proposals for action. In Germany in 1864, Rabbi Marcus (Meyer) Lehmann's German-language publication Der Israelit reported that prominent rabbis had nominated themselves as candidates for the central committee for rescue operations – the nominees included Rabbi Nathan Marcus Adler, Chief Rabbi of the British Empire; Rabbi Salomon Ulmann, chief rabbi of France; Rabbi Jacob Ettlinger; Rabbi Seligman Baer Bamberger and Rabbi Samson Raphael Hirsch of Frankfurt. The Malbim, while in Paris, wrote a letter to the heads of the Alliance Israélite Universelle, recommending that Professor Yosef Halevi lead a rescue mission, calling it "a mitzvah mission." In his words to one of the organization's directors, he says that in his opinion the Falashas are undoubtedly Jews and that Diaspora Jews should work to rescue them. In the mid-19th century, the Lithuanian Jewish traveller Jacob Saphir travelled to Jerusalem, where he met two Ethiopian Jews. Later, in his travels to Yemen, he heard numerous stories about the Beta Israel from the Jews there and returned with accounts that brought the situation of the community to the attention of European Jews; many scholars and travellers then went to visit the community. Among these travellers was Joseph Halévy, a French orientalist who met with the Beta Israel and presented himself as a "white Falasha"; the community at first refused to believe him because of the missionary activity at the time. He also writes that they believed they were the last Jews left, indicating that they had been in exile and isolated from world Jewry for a very long time. In his writings, he describes his interactions with the community: You should know, dear brothers, that I am also a Falashi, I am one of you! I believe in no other God but the one God Himself, and my religion is none other but the heritage of the Jewish people from Mount Sinai. . . . Finally the masses called out together, "You are a Falashi! A Falashi with white skin! You are mocking us! Who has ever heard or seen such a thing? Are there white Falashas under the sun?" I tried to tell them and to pledge on my faith that all the Falashas in Jerusalem and in the other countries of the world were white, and that their skin was no different from that of the other peoples among whom they lived. My mention of the word "Jerusalem," which I uttered coincidentally, immediately annulled any doubt the Falashas may have had regarding my words. Like lightning in the dark of night, the word "Jerusalem" lit up the eyes and hearts of my lost brothers. With eyes full of tears, they cried, "Ah, have you also visited Jerusalem the holy, blessed city? Have you seen the beautiful Mount Zion with your own eyes, and our magnificently built Temple, the admired and exalted palace in which the God of Israel loves to dwell in honor within? Ah, have you perhaps seen with your own eyes the grave of our foremother Rachel? Have you been in Bethlehem and in the city of Hebron, where our holy patriarchs are buried? After returning to Paris, Halévy wrote a positive report and requested that the Alliance Israélite Universelle immediately transfer 8,000 Jews to Ottoman Syria; however, the organization refused to comply with his recommendations. After these contacts, the Beta Israel fell more or less into oblivion.
Between 1888 and 1892, northern Ethiopia experienced a series of disasters: devastating famines, the invasion of the Sudanese Dervishes of the Mahdi, and the African rinderpest epizootic. The death toll was very high. "Mothers cooked and ate their own children. Horrible things are done, which are unspeakable". The Beta Israel, at this time a very poor and marginalized minority group, were particularly affected, as were their monasteries. It is estimated that between half and two-thirds of the community disappeared during this period, while world Jewry had forgotten them; the Beta Israel remember it as Kefu-qän, the "Bad Days". At the end of the 19th century the Ansar movement, a Sudanese Dervish movement, attacked British-occupied Egypt, culminating in the Mahdist War. During this time, the Mahdists demanded the conversion to Islam of the Emperor Yohannes IV, of the governor of Begemder, Tekle Haymanot of Gojjam, and of their subjects. In early 1888, the military commander Osman Digna and Abdallah ibn Muhammad launched a jihad into Begemder and conducted a violent rampage, burning monasteries, villages, and holy scriptures. During this time, it is reported that two-thirds of the Beta Israel were killed, including priests, monks, and Hachems; almost 200 of their synagogues were also destroyed. The High Priest's letter in 1905 recorded: "Before there were many; before there were 200 synagogues, now there are only 30. During the Dervishes, a frightening number of people died. We are in great suffering. Our books were destroyed; the Dervishes set them on fire. We no longer have schools; they have been destroyed." The community was hit hard during the 1890s when a famine occurred which killed 90% of all cattle; missionaries reported: "Before they were fighting over ritual purity of the cattle, and now they fought over the skin". In 1893, the recovery began and food prices began to fall. Some refugees began to return to their homes, and others fled for fear of poverty. Many areas did not thrive as before. Various testimonies from 1894 indicate that some areas of Begemder had not recovered from the blow, even after a decade: "When I visited the place many years ago, many Falashas lived there, but now all of them have disappeared and we are told they died during the famine." During this time significant portions of the communities of Dembiya, Semien, and Tselemti declined in numbers, from a few thousand to only a few hundred. A missionary of the Church's Ministry Among Jewish People, Michael Erguei, who toured the community in 1899, wrote: "Sakalat is a very fertile land. Before the Dervish invasion, it was extremely rich, like Demba, but now it is poor and depopulated. Thousands of Jews once lived in Sakalat, but today their numbers do not exceed a few hundred." The impact of the Dembiya disasters was among the worst: "Alas! Over the past three years, the beautiful country of Dembiya has become a complete wasteland where only a few inhabitants can be found here and there. Crops and cattle, which once flourished across the country, can no longer be found here; hyenas, leopards and lions are in fact the only inhabitants of the land that stretches between Parka and Dingle." During this period, the congregation could not maintain all religious practices, such as kosher food and drink or Shabbat observance, much less sacrifice; survival dictated their path. Also, the number of missionaries from Europe increased; they offered services such as hospitals, education, and food.
Many people resisted, but those who joined were regarded as the "Falash Mura". This led to some members of the congregation becoming Christians. However, Mikael Aragawi points out that these converts were not true Christians and did not know Christianity at all. Some members returned to their communities after the famine, and after a brief period of isolation and purification became part of them again. However, those who left and did not revert were excommunicated, and often settled just outside the communities. Intermarriage between their future descendants and non-Jews was low, at 0.3%. In 1904, Jacques Faitlovitch, a Jewish scholar and former student of Joseph Halévy at the École des Hautes Études in Paris, embarked on a mission to northern Ethiopia to engage with the Beta Israel community, firmly believing in their connection to the Jewish people, and worked to bring them closer to world Jewry. His expedition was funded by the Jewish philanthropist Edmond James de Rothschild and given the blessing of Chief Rabbi Zadoc Kahn. When he arrived in Ethiopia and first presented himself to the community, he notes, they refused to accept that he was Jewish, owing to the Messianic Jewish activity of the time, and it took some months of convincing before he could enter their synagogues. When he met them, the Falashas were marginalized and poor, victims of strong social ostracism and under religious pressure from the London Society for Promoting Christianity Among the Jews, which was trying to convert them. Faitlovitch's engagement with the Beta Israel marked the beginning of a long-term effort characterized by three primary objectives. Although these goals were rooted in a desire for inclusion and religious modernization, they were not without significant challenges. While the Beta Israel adhered to the Pentateuch and identified themselves as descendants of the Israelites, substantial theological and ritual differences existed between their practices and those of mainstream Rabbinic Judaism. Notably, the Beta Israel did not use the term "Jew" to describe themselves. Furthermore, during the 19th and early 20th centuries, racial differences (especially skin color) were often perceived as indicative of deeper cultural or religious distinctions. This further complicated their acceptance. In the early 20th century, Faitlovitch established an international committee to advocate for the Beta Israel. He raised awareness through the publication of his travel narrative "Notes de voyage chez les Falashas", and mobilized financial support to create educational institutions in Beta Israel villages, beginning in 1910. Faitlovitch also focused on cultivating a Western-educated Beta Israel elite. In 1905, he brought to Europe Taamrat Emmanuel, who would become a prominent leader of the Beta Israel in the first half of the 20th century. Educated in Western Jewish institutions, Emmanuel would later serve as an advisor to Emperor Haile Selassie during the 1940s and 1950s. This emerging elite played a crucial role in facilitating the integration of the Beta Israel into global Jewry. They redeveloped symbols such as the Star of David and Rabbinic holidays, and they fostered broader acceptance within the community of their Jewish identity. The experience of the Beta Israel garnered increasing support from Western Jewish organizations during the interwar period.
Institutions such as the World Jewish Congress and the American Jewish Joint Distribution Committee began to advocate on their behalf, repeatedly recognizing the Beta Israel as an integral and historic part of the Jewish people. In 1908, the chief rabbis of 45 countries made a joint statement officially declaring that Ethiopian Jews were indeed Jewish. In 1921, a public appeal of the Chief Rabbinate of Mandatory Palestine, signed by Abraham Isaac Kook and Jacob Meir, was sent to the Jewish Agency to save "50,000 holy souls from extinction" and to bring them to the British Mandate of Palestine. During the Italian occupation of Ethiopia, contact with world Jewry ceased. The local government showed a hostile attitude towards the Jews, and many of them joined the Gideon Force and the Arbegnoch. The racial laws that were published worked against the Jews, and many were executed. In 1941, Benito Mussolini had made a plan to move Jews to Ethiopia, where he had found a large Jewish population, as a way of solving the "Jewish problem"; however, this plan was never implemented. When the State of Israel was established in 1948, many Ethiopian Jews began contemplating immigrating to Israel. Nevertheless, Emperor Haile Selassie refused to grant the Ethiopian Jewish population permission to leave his empire; during this time 2,500 Jews were killed under Selassie's orders.[citation needed] After the state's establishment, the Israeli Chief Rabbinate did not follow its predecessors and refused to recognize the Beta Israel as Jews. The government, which had also rejected the application of the Law of Return in the case of the Samaritans and Karaites, followed suit and denied them the right to immigrate. However, the Jewish Agency maintained Jewish schools in Ethiopia, and when they closed in 1958 for budgetary reasons, one of them remained open. American Jewish organizations, which had been assisting the Beta Israel since the end of World War II, also ceased most of their operations in the early 1960s. Paradoxically, despite this rejection, during the 1950s and 1960s the Beta Israel moved closer to Orthodox Judaism. Within the community, belonging to world Jewry was now accepted and affirmed. Rabbinical holidays began to be observed, and a new generation of rabbis came out of the Jewish schools and spread these practices. The first Beta Israel immigrants in modern times arrived in 1934 in the British Mandate of Palestine, along with Yemenite Jewish immigrants from Italian Eritrea. In late 1949, Yaakov Weinstein was sent by the Jewish Agency Aliyah Department to Ethiopia to examine the possibility of aliyah. Based on his assessment of the Jews of Addis Ababa, Weinstein wrote that "there is no urgency in the aliyah of the Ethiopian Jews and the immigration of Jews from Eastern Europe should be prioritized." After the Weinstein report, and after the end of the period of mass immigration, Israel opposed the immigration of members of the community and the recognition of their Jewishness, because it did not want to create "racial problems", Israel at this time being a socialist democracy, in addition to the cultural differences that already existed in Israeli society, though the avoidance was largely driven by racism. In the early 1950s Yitzhak Ben-Zvi contemplated placing the Beta Israel under the Law of Return, but Prime Minister David Ben-Gurion's advisers opposed immigration from Ethiopia, and the later Prime Minister Moshe Sharett also opposed it.
During this period, a number of Beta Israel families came to Israel, as well as mixed Yemenite-Ethiopian families who lived in Begemder province and Eritrea. The seizure of power by the pro-communist Derg military junta following the Ethiopian Revolution of 1974 at first improved the situation of the Beta Israel: the former prohibitions on land ownership were lifted, and a vast redistribution of land was organized for the benefit of dispossessed peasants. However, the regime gradually adopted anti-religious and anti-Israeli positions and persecuted the Beta Israel. Above all, the country slipped towards civil war. Former feudal lords, united in the Ethiopian Democratic Union, launched an armed struggle, and in the process massacred peasants who had benefited from the land reform. Other militias were formed (the Ethiopian People's Revolutionary Party, the Tigray People's Liberation Front), and the entire north of the country was plunged into war. Famine became entrenched in the early 1980s. The situation for the people of the north became untenable, and hundreds of thousands of Ethiopians, including Beta Israel, attempted to flee to neighbouring Sudan. Between 1948 and 1975, slightly fewer than 500 Ethiopian Jews emigrated to Israel (see statistics below). This early wave of migration primarily consisted of educated men who entered Israel on tourist visas, often under the pretense of Christian pilgrimage from Ethiopia to the Holy Land, and subsequently remained in the country without legal status. Upon arrival, these individuals were identified and supported by advocates, rabbis, and politicians who recognized their status. These supporters eventually organized into associations led by Norman Bentwich, Aryeh Tartakower, Graenum Berger, Rachel Yanait Ben-Zvi, Ruth Dayan, Aharon Cohen, Mordechai Ben-Porat, Amnon Linn and Ovadia Hazzi, amongst many others. With the assistance of these advocates, some Ethiopian Jews were able to regularize their legal status in Israel. In certain cases, individuals agreed to undergo formal conversion to Judaism, thereby resolving their personal legal challenges, though this did little to address the broader legal and religious status of the Beta Israel community as a whole. Those who succeeded in regularizing their status often facilitated the immigration of their immediate family members. In 1973, Israel's Sephardi Chief Rabbi Ovadia Yosef cited rabbinic rulings of the Radbaz, Rabbi Azriel Hildesheimer, the Maharikash and two former Ashkenazic chief rabbis of Israel, Chief Rabbi Abraham Isaac Kook and Chief Rabbi Yitzhak HaLevi Herzog, in declaring the Beta Israel Jews according to Halachah. He said: "It is our duty to redeem them from assimilation, to hasten their immigration to Israel, to educate them in the spirit of our holy Torah and to make them partners in the building of our sacred land....I am certain that the government institutions and the Jewish Agency, as well as organizations in Israel and the Diaspora, will help us to the best of our ability in this holy task..., the Mitzvah of redeeming the souls of our people...for everyone who saves one soul in Israel, it is as though he had saved the whole world." In 1974, Ashkenazi Rabbi Shlomo Goren also ruled that the Ethiopian Jews are a part of the Jewish people, and that this had already been established by Chief Rabbi Abraham Isaac Kook. From 1975 onwards, the majority of Ethiopian Jews made aliyah under the 1950 Law of Return.
Several undercover rescue missions were organized by activists and Mossad agents to get them out of Ethiopia. This laid the groundwork for larger-scale immigration efforts, which began in earnest under the government of Menachem Begin starting in 1977 and culminated in Operation Moses and Operation Solomon. Immigration to Israel The emigration of the Beta Israel community to Israel was officially banned by the Communist Derg government of Ethiopia during the 1980s, although it is now known that General Mengistu collaborated with Israel in order to receive money and arms in exchange for granting the Beta Israel safe passage during Operation Moses. Population The largest obstacle facing the Ethiopian Beta Israel community in Israel is likely the low level of formal education of the majority of immigrants, who lacked the skills needed in a developed economy like Israel's. Because of the rural nature of Ethiopia, illiteracy was widespread, although young people were better educated. As a result, the abrupt transition from Ethiopian village life to Israel had a significant impact on their integration into Israeli society. Because of these challenges, the Israeli government created several programs to improve the socio-economic status of Ethiopian Jews in Israel and to narrow and close educational gaps. One such program is the Ministry of Education's "New Way" program for integrating students of Ethiopian descent into the education system. Among the key achievements of the program was the increase in Bagrut (matriculation) eligibility rates among students of Ethiopian descent. In the 2023/4 school year, the percentage of 12th-grade students of Ethiopian descent taking matriculation exams was 93.4%, compared with 95.2% of students in the Hebrew education system overall. In 2023/4, the rate of eligibility for the Bagrut certificate among students of Ethiopian descent equaled the rate of eligibility among all Hebrew speakers for the first time, after a cumulative increase of 12.5% since 2017/18. In the 2022/3 school year, the rate of eligibility for a Bagrut that meets the threshold requirements of the universities among 12th-grade students of Ethiopian descent was 59.2%, compared to 76% in the Hebrew education system overall (excluding ultra-Orthodox supervision) and 51.3% in the Arab education sector. The percentage of students of Ethiopian descent eligible for a matriculation certificate that meets university entrance requirements has also been increasing in recent years, and the gap between them and the general Hebrew-speaking students narrowed from 27 percentage points in 2016 to 17 in 2022/3; this gap is still high in comparison to the gap in overall Bagrut eligibility. The number of students of Ethiopian origin studying at institutions of higher education has been increasing in recent years: from 3,194 in 2016/17 to 4,144 in 2023/24, an increase of 29.7%. Over the same period, the general student population increased by 3.9%, from 227,700 in 2016/17 to 235,500 in 2023/24. The percentage of women among Ethiopian students was higher than the corresponding percentage among the general Hebrew-speaking student population. In 2019, the net income per household among Israelis of Ethiopian descent was 14,027 NIS, compared to 17,779 NIS for all Jewish households and 11,810 NIS for Arab households. Ethiopian Beta Israel have gradually integrated into Israeli society in religious life, national and military service, education, and politics.
Like other immigrant communities, they faced significant challenges in assimilating into a modern industrial nation. These included low formal education levels, disrupted traditional community hierarchies, extreme racial prejudice, and halachic doubts over the Jewish status of groups like the Falash Mura. Over successive generations, however, Ethiopian Israelis have overcome many of these challenges and made increasing socioeconomic progress. Before the mass aliyot of the 1980s and 1990s, small groups of Beta Israel had already moved to Israel in the 1950s as part of educational exchanges, and in trickles in the following decades. After Rabbi Ovadia Yosef's 1973 halachic ruling, the Israeli government formally recognized them as eligible for aliyah under the Law of Return. In preparation for the absorption of tens of thousands, the Ministry of Immigrant Absorption issued two detailed master plans: in 1985 (after Operation Moses) and in 1991 (following Operation Solomon). These plans addressed housing, education, employment, and social services, but a strong assimilationist bias and inadequate cultural adaptation led to mixed outcomes. In terms of employment, most early immigrants came from rural subsistence economies and lacked job skills, but military service has since become a key vehicle for social advancement: IDF enlistment rates among Ethiopian Israelis are high, especially among Israeli-born youth, and educational outcomes and socioeconomic status have risen in the second and third generations. The majority of Ethiopian Israelis identify with right-wing parties (mainly Shas and the Likud, but also the Religious Zionist Party) and with the political centre (including the Kadima and Blue and White parties). A 2001 study found that many of them were reluctant to vote for declared left-wing parties such as Meretz and the Israeli Labor Party, an incarnation of Mapai, which is identified with the period of denial of the right of return, and in particular because of the controversial political and security positions of the political left in Israel. Racism in Israel In May 2015, The Jewish Daily Forward described the Ethiopian Jewish community in Israel as one that has "long complained of discrimination, racism, and poverty". The absorption of Ethiopians into Israeli society represents an ambitious attempt to deny the significance of race. Israeli authorities, aware of the situation of most African diaspora communities in other Western countries, set up programs intended to prevent patterns of discrimination from taking hold. The Ethiopian Jewish community's internal challenges have been complicated by perceived racist attitudes in some sectors of Israeli society and the establishment. Demonstrations have taken place in Israel protesting alleged racism against Ethiopian immigrants. In April 2015, an Ethiopian IDF soldier was the victim of an unprovoked and allegedly racist attack by an Israeli policeman, and the attack was caught on video. The soldier, Damas Pakedeh, was arrested and then released, after being accused of attacking the policeman. Pakedeh is an orphan who emigrated from Ethiopia with his siblings in 2008. He believes the incident was racially motivated, and that, if the video had not been taken, he would have been punished. Instead, the police officer and volunteer were suspended pending an investigation.
Likud MK Avraham Neguise called on National Police Chief Yohanan Danino to prosecute the police officer and volunteer, saying they engaged in "a gross violation of the basic law of respecting others and their liberty by those who are supposed to protect us". The Jerusalem Post noted that in 2015 "there have been a series of reports in the Israeli press about alleged acts of police brutality against Ethiopian Israelis, with many in the community saying they are unfairly targeted and treated more harshly than other citizens". The incident of police brutality against Pakedeh, and the alleged brutality of officials from Israel's Administration of Border Crossings, Population, and Immigration towards Walla Bayach, an Israeli of Ethiopian descent, brought the Ethiopian community out to protest. Hundreds of Ethiopians participated in protests in the streets of Jerusalem on April 20, 2015, to decry what they view as "rampant racism" and violence in Israel directed at their community. Israel Police Commissioner Yohanan Danino met with representatives of the Israeli Ethiopian community that day following the recent violent incidents involving police officers and members of the community. When over a thousand people protested police brutality against Ethiopians and Mizrahi Israelis, Prime Minister Benjamin Netanyahu announced: "I strongly condemn the beating of the Ethiopian IDF soldier, and those responsible will be held accountable." Following protests and demonstrations in Tel Aviv that resulted in violence, Netanyahu planned to meet with representatives of the Ethiopian community, including Pakedeh. Large protests broke out in July 2019 after Solomon Teka, a young Ethiopian man, was shot and killed by an off-duty police officer in Kiryat Haim, Haifa, in northern Israel. On 8 December 2012, the Israeli Educational Television program Vacuum, hosted by Gal Gabbai, aired a report claiming that in 2004, female Ethiopian Jewish immigrants were coerced into receiving Depo‑Provera injections in transit camps in Ethiopia. They were told it was a prerequisite for immigration and often misled into believing that it was a vaccination rather than birth control. The practice was first reported in 2010 by Isha le'Isha (Hebrew: Woman to Woman), an Israeli women's rights organization. Hedva Eyal, the author of the report, stated: "We believe it is a method of reducing the number of births in a community that is black and mostly poor." Haaretz criticized international coverage of the issue, stating that although some Ethiopian Jewish women's procreational rights had been violated through medical malpractice, the contraceptive effects would only last for three months, and that claims of state-sponsored sterilization were falsehoods warped by circular reporting. The newspaper also issued a correction to its earlier reporting on the story. A 2016 investigation into the claims of the 35 women found no evidence that forced birth control injections of Ethiopian Jews had taken place. A subsequent independent study showed that the decline in the community's fertility rate was "the product of urbanization, improved educational opportunities, a later age of marriage and commencement of childbirth and an earlier age of cessation of childbearing." Religion Haymanot refers to the religious practices of the Beta Israel. In Geʽez, Tigrinya and Amharic, the word means 'religion' or 'faith'; thus in modern Amharic and Tigrinya it can refer to the Christian haymanot, the Jewish haymanot or the Muslim haymanot.
In Israel, the term is associated only with Judaism. The holiest book of the community is the Orit ("Law"; Imperial Aramaic: אורית), which includes all the books of the Hebrew Bible and some deuterocanonical books. Shabbat was seen as the holiest day of the week for the Beta Israel, and the sanctity of the day was very great; it is also regarded as the holiest holiday of the community, because it is the first in the list of sacred dates in the Book of Leviticus and, among the holidays, the only one included in the Ten Commandments. The community made preparations for Shabbat on Thursday: they washed clothes, cleaned the house, and ground flour for baking dabuh (challah). On Friday afternoon, the family ceased all housework and outside work, and everyone went down to the river to wash their clothes and themselves. They dressed in white and returned to their villages. After sundown, everyone gathered in the synagogue for Shabbat services. The kohanim arrived at the synagogue first, before the rest of the community. After the Shabbat service ended, the participants returned home and recited blessings. Among Ethiopian Jews there was no precisely defined time for the start of Shabbat, so people began Shabbat as sunset approached. Members of the community did not light fires on Shabbat, as mentioned by Obadiah of Bertinoro, similarly to the Karaites; a fire lit beforehand was permitted to burn, but using it would violate the sanctity of Shabbat, so Ethiopian Jews ate cold food on Shabbat. The Radbaz writes, however, that this should not be considered a Karaite practice, since their intentions are different, and it is therefore not heretical. According to Ethiopian Halacha, the ban on doing any kind of work on Shabbat is absolute and cannot be overruled. The Təʾəzazä Sänbät says: "For six days you will work, and the seventh will be for Hashem, your Lord, your God. You will do no labor on this day, neither you, nor your wives, nor your sons, nor your daughters, nor your mothers, nor your beasts, nor the foreigners, nor the horses that are with you. One who violates this day should die. One who lies with his wife on Shabbat should die. . . . Anyone who sells or purchases on Shabbat should die. One who draws water from a fresh water source, one who fights, one who curses or swears on this day should die. One who did not prepare his food or drink, and one who did not sanctify the Shabbat should die. One who carries something, one who takes something out of his tent, and one who takes something in from outside, should die. You should not do any work on Shabbat. . . . Any person who works on my Shabbatot, who travels on the road, who works in the field or at home, who lights a fire, or who stands in the sun should die. . . . One who rides a beast, one who sails on a boat should die. . . . As for women, do not clean, do not cook, do not draw water, do not grind in a mortar, do not shout." Beta Israel kashrut law is based mainly on the books of Leviticus, Deuteronomy, and Jubilees. Leviticus 11:3–8 and Deuteronomy 14:4–8 list permitted and forbidden land animals and their signs. Leviticus 11:13–23 and Deuteronomy 14:12–20 list forbidden birds. Leviticus 11:9–12 and Deuteronomy 14:9–10 list the signs of permitted fish. Insects and larvae are forbidden in Leviticus 11:41–42. Gid hanasheh is forbidden in Genesis 32:33. Mixtures of milk and meat are not prepared or eaten, but benefiting from them is permitted.
Haymanot uses a literal interpretation of the verses Exodus 23:19, Exodus 34:26, and Deuteronomy 14:21, "shalt not seethe a kid in its mother's milk", similar to Karaite Judaism; whereas under Rabbinic Judaism, any benefit from mixing dairy products with meat is banned. Ethiopian Jews were forbidden to eat the food of non-Jews. A Kahen eats only meat he has slaughtered himself, which someone else may prepare. Someone else may also eat meat that a Kahen has slaughtered. Those who break these taboos are ostracized and must undergo a purification process that includes fasting for one or more days, eating only uncooked chickpeas provided by the Kahen, and ritual purification, before entering the village. Unlike other Ethiopians, the Beta Israel do not eat raw meat dishes such as kitfo or gored gored. In the Beta Israel community, circumcision had traditionally been carried out by women, a practice attested during the Second Temple period. The ceremony took place on the eighth day following the birth of the child. A man would only perform the circumcision if no woman skilled in the procedure was available. The involvement of women in this ritual is linked to the concept of ritual impurity associated with childbirth, as the mother is considered impure according to Torah law at the time of delivery. Consequently, the newborn is also deemed impure. To address this, a midwife, who is already in a state of impurity, would conduct the circumcision, avoiding the unnecessary impurity that a man would incur by participating in the process. Following the destruction of the Second Temple, the role of women as circumcisers diminished. Circumcision typically occurs in the "maternity home" where the mother resides, and the celebration of the child's birth, along with the naming and selection of a godfather, is contingent upon the mother's purification after forty days. This custom was first documented in the 9th century by Eldad ha-Dani: "On the eighth day, the child will be circumcised by women." In 1986, a more detailed halakhic inquiry was published, which found that the circumcision procedure used by Ethiopian Jews was consistent with Rabbinic law. The laws of purity and impurity of the Beta Israel were strict and strictly adhered to. The Beta Israel treated their place of residence as a holy city, in which they conducted themselves like the Essenes. The people of Beta Israel used to immerse after becoming defiled by any impurity: impurity from a dead person, a woman who had given birth, a woman who had a miscarriage, impurity from an insect, a carcass, and after any contact with a gentile, his utensils, or his food. Many of them even practiced immersion after stepping on the feces of an animal or an unclean animal. In addition, members of the community used to immerse themselves on Fridays and on holidays and festivals in order to be as pure as possible. The community's frequent use of water for ritual immersion led to their being nicknamed by the surrounding non-Jews "those who smell water", as well as to the establishment of their settlements near water sources. At the beginning of the niddah (menstruation), a woman was not allowed to be in her home and moved for seven days to a newly constructed dwelling called the "blood dwelling" ("Yadam Gojo"). The building was surrounded by stones to mark it as a place of niddah.
During this time, the woman was not allowed to be seen, and her food was served with special utensils that were then purified or broken. At the end of the niddah, the woman would ritually immerse and return to her home. The source of the Halacha of niddah is Leviticus chapter 15, verse 19: "And if a woman be unclean, the blood of her uncleanness shall be in her flesh; she shall be in her separation seven days, and whosoever toucheth her shall be unclean until the evening." Mäṣḥafä Kedus (lit. "Holy Scriptures") is the name for the religious literature of the Beta Israel. These texts are written in Geʽez, which is also the liturgical language of the Ethiopian Orthodox Church. The holiest book among Ethiopian Jews is the Orit ("Torah", from Aramaic). The Orit of Ethiopian Jews includes all the books recognized as canonical by other Jewish communities, as well as non-canonical books. Deuterocanonical books that make up part of the Beta Israel canon are the Book of Sirach, Book of Judith, Esdras 1 and 2, the Books of Meqabyan, Book of Jubilees, Book of Baruch (including 4 Baruch), Book of Enoch, and the Testaments of Abraham, Isaac, and Jacob. Many of these books differ substantially from the similarly numbered and named texts in Koine Greek and Hebrew (such as the Book of Maccabees), though some of the Ge'ez works are dependent on those texts. Others appear to have different ancient literary and oral origins. Ethiopian Orthodox Christians also use many texts used by the Beta Israel.[citation needed] Essential non-Biblical writings include the Mota Aron ("Death of Aaron"), Mota Musé ("Death of Moses"), Nagara Muse ("The Conversation of Moses"), Təʾəzazä Sänbät ("Commandments of the Sabbath"), Arde'et ("Disciples"), Gorgoryos ("Apocalypse of Gorgorios"), Ezra ("Apocalypse of Ezra"), Barok ("Apocalypse of Baruch"), Mäṣḥafä Sa'atat ("Book of Hours"), Fālasfā ("Philosophers"), Abba Elias ("Father Elijah"), Mäṣḥafä Mäla'əkt ("Book of Angels"), Dərsanä Abrəham Wäsara Bägabs ("Homily on Abraham and Sarah in Egypt"), Gadla Sosna ("The Story of Susanna"), and Baqadāmi Gabra Egzi'abḥēr ("In the Beginning God Created"). The centre of life was the Mesgid. The word Mesgid comes from a word meaning "worship" and was also used by the ancient Jews of Elephantine as the term for the synagogue. In a mosque in the town of Tan'am in the eastern Sana'a Governorate of Yemen, where a Jewish community had existed since the destruction of the Second Temple, a pillar in secondary use was found engraved with the inscription "Amen, Amen, and this house will be a place of worship" in letters of the Sabaic script. The researchers Walter Müller and Wolf Leslau concluded from this that the word Mesgid derives from the root Sged, which means "to bow" in Aramaic, in the sense of a house of worship. The word has no connection with the Arabic Masjid, which refers to a mosque. The Mesgid was also called bet meqdas ("holy house") or ṣalot bet ("prayer house"). Before entering the mesgid, worshippers removed their shoes, a custom practiced by Jews in ancient times. There were also no chairs in the mesgid; people laid carpets and sat on the floor. Though this practice is no longer in use today, many non-Ashkenazic communities once sat on the floor in the synagogue, and the Rambam provides similar insight in his Mishneh Torah, noting that this was the common practice amongst Eastern Jews. The Beta Israel calendar is a lunar calendar of 12 months, each of 29 or 30 days alternately.
Every four years there is a leap year, which adds a full month (30 days). The calendar is a combination of the ancient calendar of Alexandrian Jewry, the Book of Jubilees, the Book of Enoch, and the Ethiopian calendar. The years are counted according to the counting of Kushta: "1571 to the Christians, 7071 to the Gyptians, and 6642 to the Hebrews"; according to this counting, the year 5771 (Hebrew: ה'תשע"א) in the Rabbinical Hebrew calendar is the year 7082 in this calendar. The month names are Hebrew and consistent with the Rabbinic calendar, and holidays fall in line with the Rabbinic calendar rather than the Ethiopic calendar. A Pentecontad calendar, an ancient calendar attested in the Dead Sea Scrolls, is still in use among the Ethiopian Jews. In this calendar, each fifty-day period is made up of seven weeks of seven days plus an extra fiftieth day, known as the atzeret (meaning "assembly" or "day of assembly" in Hebrew). According to this Beta Israel tradition: "The Sabbaths are divided into cycles of seven. A special prayer is recited at sunset and reflects the particular characteristics of each Sabbath. The seventh Sabbath, the Legatä Sänbät, is the holiest of all, and there are extra prayers, festivities and a special sanctification service." In addition to the ritual slaughter of animals, it was customary in the Beta Israel community to sacrifice a burnt offering, a sin offering, and a Passover sacrifice on an altar called an Avod, which was placed on the northern side of the Mesgid compound. Instead of a birth offering, the woman in labor would bring a bread offering and sheaves – forty for a male child, eighty for a female child – with which the priest would lightly whip her as part of her purification ritual. The slaughtering of a red heifer was also customary; the last red heifer was slaughtered in 1952 by Abba Yitzhak. The prophet Zephaniah foretold that after the destruction of the First Temple, offerings would continue to be brought by Jews from beyond the River Kush (modern-day Ethiopia), in the Book of Zephaniah, chapter 3, verse 10: "From beyond the River Kush, my servants, daughters of my dispersed ones, will bring me my offering." The custom of offering sacrifices was officially abolished by the priests of the community with the establishment of the State of Israel. After the mass immigration, most of the community have adopted Sephardic law and customs and have been included under the oversight of Israel's already broad Sephardic Chief Rabbinate. The current Ethiopian Chief Rabbi is Reuven Wabashat. Culture According to a manuscript of the community, Hebrew began to decline as a liturgical language in the mid-16th century and disappeared completely at the beginning of the 17th century. Some sub-groups of the Beta Israel also once spoke Qwara and Kayla, both of which are Agaw languages. Now they speak Tigrinya and Amharic, both Semitic languages; their liturgical language is Geʽez, also Semitic. Since the 1950s, Hebrew has been taught in their schools. Those Beta Israel residing in the State of Israel now use Modern Hebrew as a daily language. Ethiopian Jews have introduced several traditional dishes that are now popularly eaten by Israelis of all backgrounds, among them the staple injera, a spongy, sour flatbread made from teff flour, which serves as both a food and a utensil in Ethiopian meals. Other popular dishes include traditional stews such as wat and shiro (a seasoned chickpea or lentil puree), as well as Ethiopian coffee.
Dough foods in the kitchen include various types of bread, such as dabo, himbasha, amiza (maswait) and birakat, which are eaten on Shabbat and holidays. Beta Israel cuisine also includes several porridges, such as genfo and kancha made from barley, aja made from oats, and ailwat made from fenugreek. Drinks include tej, made from water, honey, and geshu; katikala (arak), made from water, flax, and honey; biraz, made from water, honey, and ginger; and shai (Abyssinian tea). Ethiopian Jewish men wore a white tunic and loose-fitting trousers akin to breeches. After reaching the age of 13, boys covered their heads with a white head wrap, and married men wore a tallit over their heads. Wealthier individuals would wear a jacket resembling a kaftan over this attire. Ethiopian Jewish women wore white robes known as kemis, often embellished with floral patterns and other designs, along with a head covering. The prominence of white in their clothing is attributed to cotton weaving, one of their many trades. Because of the 15th-century legislation under which they lost the right to own land, many members of the Beta Israel were compelled to engage in various crafts, such as gold- and silversmithing, cotton weaving, ironworking and, notably, blacksmithing. The community demonstrated considerable expertise in blacksmithing, producing weaponry for the Imperial army. Over time, this led local Christians to suspect that the community was involved in witchcraft, given their frequent proximity to fire. In the 18th century, Empress Mentewab and her son, Emperor Iyasu II, employed Jewish builders and artists to construct and design opulent structures within the Fasil Ghebbi complex. After receiving money for their crafts from their non-Jewish neighbours, members of the community would place the coins in a bowl of water, immerse themselves in a spring or a stream, and collect the money afterwards, in order to maintain strict ritual purity. Interactions with Israeli culture The arrival of the Beta Israel community in the modern State of Israel marked a profound meeting of ancient traditions with contemporary national culture. Their first interactions with Israeli society in the 1980s and early 1990s were shaped by a complex mix of hope, disillusionment, and culture shock. Operations such as Operation Moses (1984) and Operation Solomon (1991) brought tens of thousands of Ethiopian Jews to Israel, fulfilling a long-held dream of return to the ancestral homeland. However, the reality they encountered in Israel often clashed with their expectations and religious identities. One of the community's main challenges was adapting to a secular country, far removed from their religiously centred communal life, which was focused on ritual purity and religious authority and often came into conflict with the policies of the early left-wing governments. One such conflict arose when many families tried to build extensions to their homes as separate dwellings for women in niddah (menstruation); the government disapproved, which also created challenges for communal life, for example having husbands take care of the home and children one week a month while the mother was in the dwelling. Such differences have made it difficult for the older generations to fully integrate into Israeli culture, though they have not prevented them from building lives in Israel.
The new generations, on the other hand, have quickly merged into Israeli culture. A study by the Taub Center found that Ethiopian-Israeli youth who were born in Israel or immigrated at a young age show much higher levels of high-school completion and are far ahead of those who arrived later. For example: "The high school graduation rate among Ethiopian Israelis who moved to Israel at an older age is only 36%. In comparison, the rate of high school graduates among those educated in Israel is above 95%." A survey of those who have lived in Israel for 20+ years found that a large majority feel "at home" in Israel, and many have friends outside their own community, speak Hebrew in different social contexts, serve in the military or national service, and vote in elections. In addition, enlistment rates among Ethiopian-Israeli youth are reported to be high, and some have attained relatively high ranks in the Israel Defense Forces. Over the past few decades, Ethiopian Israelis have begun to leave a powerful imprint on Israel's cultural landscape: a new wave of artists has emerged, blending traditional Ethiopian sounds with contemporary genres like pop, soul, funk, hip-hop, and jazz, among them Teddy Neguse. Strong Black Coffee is a popular duo from Hadera; one of their hit songs is "Ihiye Beseder" ("It'll Be Alright"), a track with a cheerful melody and appealing chorus (lyrics: "I know everything will be okay / Never mind what others say privately / We'll manage with the help of God") that has drawn over 27 million views. The duo was also a nominee for the 2015 MTV Europe Music Awards. Another musician, Hagit Yaso, was the winner of the ninth season of Kokhav Nolad. Yaso has toured and performed internationally; in January 2013 she toured several cities across the US, sponsored by the Jewish National Fund, and the highlight of her tour was a performance at the Saban Theatre in Los Angeles, where she performed a duet of "Killing Me Softly" with the song's writer, Charles Fox. Eden Alene won the seventh season of the singing competition HaKokhav HaBa and represented Israel in the Eurovision Song Contest 2021 with her song "Set Me Free", which notably includes a B6 whistle note, the highest note in the history of the Eurovision Song Contest. Israeli politicians of Ethiopian Jewish descent include MK Pnina Tamano-Shata, who was Minister of Aliyah and Integration for Blue and White; MK Avraham Neguise of Likud, founder of Atid Ehad, a centre-right party that represented the concerns of the Ethiopian-Israeli community; former Deputy Mayor of Tel Aviv Mehereta Baruch-Ron; MK Moshe Solomon of Mafdal-Religious Zionism; and American-Israeli politician Mazi Melesa Pilip, a legislator in the Nassau County Legislature. Other notable figures include Yityish Titi Aynaw, an Israeli model, television personality, and beauty pageant titleholder who was crowned Miss Israel 2013; Eli Dasa, an Israeli footballer and the current captain of the Israel national football team; and Branu Tegene, a reporter for Hevrat HaHadashot and a war correspondent during the Iran–Israel war. In popular culture National memorials to the Ethiopian Jews who died on their way to Israel are located in Kiryat Gat and at the National Civil Cemetery of the State of Israel on Mount Herzl in Jerusalem. In 2009, plans to establish an Ethiopian Heritage Museum dedicated to the heritage and culture of the Ethiopian Jewish community were unveiled in Rehovot.
The museum will include a model of an Ethiopian village, an artificial stream, a garden, classrooms, an amphitheater, and a memorial to Ethiopian Zionist activists and Ethiopian Jews who died en route to Israel. Related groups Beta Abraham (Ge'ez: Bēta Avreham, "House of Abraham") is a related community; other terms by which it has been known include Tebiban ("possessor of secret knowledge"), Balla Ejj (Ge'ez: "craftsmen"), and Buda (Ge'ez: "evil eye"). The earliest reference to the Jewish community in the historical region of Shewa comes from the 13th-century missionary Zena Marqos. More Jews arrived in the region of Shewa from the regions of Fogera and Dembiya during the rule of Negasi Krestos, and as a result a first wave of Jewish immigration took place in the years 1692–1702. The Beta Israel, like their non-Jewish neighbours back in Ethiopia, were owners of "black" slaves, a fact that group members have not shared with outsiders in Israel. Slavery was practiced in Ethiopia, as in much of Africa, until it was formally abolished in 1942. When a person was enslaved by a Jew, they went through conversion (giyur) to Judaism. The Barya status is not specific to the Beta Israel and is found in other communities in northern Ethiopia. The Baryas descend from servants bought on the ancient slave markets of the Horn of Africa who converted to the religion of their masters. They are considered "Black" (t'equr, or shanqilla) by the Beta Israel, who perceive themselves as qey ("red") or t'eyem ("brown"), having lighter skin and more "Middle Eastern" facial features than the populations of the continental interior. The Baryas were not allowed to join the congregation or to enter the synagogues, and had to stay in the courtyard. Ownership passed down through the master's family for many generations. Intermarriage with them was prohibited, and a mixed descendant was considered a slave. Genetics Several DNA studies have been done on the Beta Israel. According to Cruciani et al. (2002), haplogroup A is the most common paternal lineage among Ethiopian Jews. The clade is carried by around 41% of Beta Israel males and is primarily associated with Nilo-Saharan- and Khoisan-speaking populations. However, the A branches carried by Ethiopian Jews are principally of the A-Y23865 variety, which formed about 10,000 years ago and is localized to the Ethiopian highlands and the Arabian peninsula; the divergence from some Khoisan lineages is 54,000 years, and from others 125,000 years. Around 18% of Ethiopian Jews are bearers of E-P2 (xM35, xM2); in Ethiopia, most of such lineages belong to E-M329, which has been found in ancient DNA isolated from a 4,500-year-old Ethiopian fossil. Such haplotypes are frequent in southwestern Ethiopia, especially among Omotic-speaking populations. The rest of the Beta Israel mainly belong to haplotypes linked with the E-M35 and J-M267 haplogroups, which are more commonly associated with Ethiosemitic- and Cushitic-speaking populations in Northeast Africa. Further analysis shows that the E-M35 carried by Ethiopian Jews is primarily indigenous to the Horn of Africa rather than being of Levantine origin. Altogether, this suggests that Ethiopian Jews have diverse patrilineages indicative of indigenous Northeast African, not Middle Eastern, origin. A 2011 mitochondrial DNA study of maternal haplogroups, sampling 41 Beta Israel, found them to carry 51.2% macro-haplogroup L, typically found in Africa. The remainder consisted of Eurasian-origin lineages: 22% R0, 19.5% M1, 5% W, and 2.5% U.
However, no identical haplotypes were shared between the Yemenite and Ethiopian Jewish populations, suggesting very little gene flow between the populations and potentially distinct maternal population histories. The maternal ancestral profile of the Beta Israel is similar to those of highland Ethiopian populations, such as the Amhara. The Ethiopian Jews' autosomal DNA has been examined in a comprehensive study by Tishkoff et al. (2009) on the genetic affiliations of various populations in Africa. According to the Bayesian clustering analysis, the Beta Israel generally grouped with other Ethiosemitic- and Cushitic-speaking populations inhabiting the Horn of Africa. A 2010 study by Behar et al. on the genome-wide structure of Jews observed that "Ethiopian Jews (Beta Israel) and Indian Jews (Bene Israel and Cochini) cluster with neighbouring autochthonous populations in Ethiopia and western India, respectively, despite a clear paternal link between the Bene Israel and the Levant. These results cast light on the variegated genetic architecture of the Middle East, and trace the origins of most Jewish Diaspora communities to the Levant." According to the study by Behar et al., Ethiopian Jews cluster with the Ethiosemitic-speaking Amhara and Tigrayans rather than with the Oromo. The Beta Israel are autosomally closer to other populations from the Horn of Africa than to any other Jewish population, including Yemenite Jews. A 2012 study by Ostrer et al. concluded that the Ethiopian Jewish community was founded about 2,000 years ago, probably by only a relatively small number of Jews from elsewhere, with local people joining the community, causing the Beta Israel to become genetically distant from other Jewish groups. Falash Mura The original term that the Beta Israel gave to the converts was "Faras Muqra" ("horse of the raven"), in which the word "horse" refers to the converts and the word "raven" refers to the missionary Martin Flad, who used to wear black clothes. From this term derived the additional names Falas Muqra, Faras Mura and Falas Mura. In Hebrew the term "Falash Mura" (or "Falashmura") is probably a result of confusion over the use of the term "Faras Muqra" and its derivatives; on the basis of a false cognate it was given the Hebrew meaning Falashim Mumarim ("converted Falashas"). The actual term "Falash Mura" has no clear origin; it is believed that it may come from Amharic and mean "someone who changes their faith." For years, Ethiopian Jews were unable to own land and were often persecuted by the Christian majority of Ethiopia. Ethiopian Jews were afraid to touch non-Jews because they believed non-Jews were not pure, and they were also ostracized by their Christian neighbors. For these reasons, many Ethiopian Jews converted to Christianity to seek a better life in Ethiopia. In 1860, Henry Aaron Stern, a Jewish convert to Christianity, traveled to Ethiopia in an attempt to convert the Beta Israel community to Christianity. The Jewish Agency's Ethiopia emissary, Asher Seyum, says the Falash Mura "converted in the 19th and 20th century, when Jewish relations with Christian rulers soured. Regardless, many kept ties with their Jewish brethren and were never fully accepted into the Christian communities. When word spread about the aliyah, many thousands of Falash Mura left their villages for Gondar and Addis Ababa, assuming they counted." In the Achefer woreda of the Mirab Gojjam Zone, roughly 1,000–2,000 families of Beta Israel were found.
There may be other such regions in Ethiopia with significant Jewish enclaves, which would raise the total population to more than 50,000 people.[citation needed] The Falash Mura did not refer to themselves as members of the Beta Israel, the name for the Ethiopian Jewish community, until after the first wave of immigration to Israel. Being Beta Israel by ancestry, the Falash Mura believe they have just as much of a right to return to Israel as the Beta Israel themselves. Rabbi Ovadia Yosef, a major player in the first wave of Beta Israel immigration to Israel, declared in 2002 that the Falash Mura had converted out of fear and persecution and therefore should be considered Jews. Today, Falash Mura who move to Israel must undergo conversion on arrival, which makes it more difficult for them to become established in Israeli society. The Beta Israel who immigrated and made aliyah through Operation Moses and Operation Solomon were not required to undergo conversion, because they were accepted as Jews under the Law of Return. On February 16, 2003, the Israeli government applied Resolution 2958 to the Falash Mura, which grants maternal descendants of Beta Israel the right to immigrate to Israel under the Israeli Law of Return and to obtain citizenship if they convert to Judaism. In 1991, the Israeli authorities announced that the emigration of the Beta Israel to Israel was about to conclude, because almost all of the community had been evacuated. Nevertheless, thousands of other Ethiopians, Jewish converts to Christianity who were asking to immigrate to Israel, began leaving the northern region to take refuge in the government-controlled capital, Addis Ababa. As a result, a new term arose to refer to this group: "Falash Mura". The Falash Mura, who were not part of the Beta Israel communities in Ethiopia, were not recognized as Jews by the Israeli authorities and were therefore not initially allowed to immigrate to Israel, making them ineligible for Israeli citizenship under Israel's Law of Return. As a result, a lively debate arose in Israel about the Falash Mura, mainly between the Beta Israel community in Israel and their supporters on one side and, on the other, those opposed to a potential massive emigration of the Falash Mura. The government's position on the matter remained quite restrictive, but it has been subject to numerous criticisms, including criticism by some clerics who want to encourage these people's return to Judaism. During the 1990s, the Israeli government finally allowed most of those who fled to Addis Ababa to immigrate to Israel. Some did so through the Law of Return, which allows an Israeli parent of a non-Jew to petition for his or her son or daughter to be allowed to immigrate to Israel. Others were allowed to immigrate to Israel as part of a humanitarian effort. The Israeli government hoped that admitting these Falash Mura would finally bring emigration from Ethiopia to a close, but it instead prompted a new wave of Falash Mura refugees fleeing to Addis Ababa and wishing to immigrate to Israel. This led the Israeli government to harden its position on the matter in the late 1990s. In February 2003, the Israeli government decided to accept Orthodox conversions of Falash Mura performed in Ethiopia by Israeli rabbis, after which they could immigrate to Israel as Jews.
Although this new position is more open, and although the Israeli governmental and religious authorities should in theory allow the immigration to Israel of most of the Falash Mura wishing to do so (who are now acknowledged to be descendants of the Beta Israel community), in practice that immigration has remained slow, and from 2003 to 2006 the Israeli government limited the immigration of Falash Mura to about 300 per month.[citation needed] In April 2005, The Jerusalem Post stated that it had conducted a survey in Ethiopia which concluded that tens of thousands of Falash Mura still lived in rural northern Ethiopia.[citation needed] On 14 November 2010, the Israeli cabinet approved a plan to allow an additional 8,000 Falash Mura to immigrate to Israel. On November 16, 2015, the Israeli cabinet unanimously voted in favor of allowing the last group of Falash Mura to immigrate over the next five years, but their acceptance would be conditional on a successful Jewish conversion process, according to the Interior Ministry. In April 2016, it was announced that a total of 10,300 people would be included in the latest round of aliyah over the following five years. By May 2021, 300 Falasha had been brought to Israel, joining 1,700 who had already immigrated; an estimated 12,000 more remained in Ethiopia. Today, both Israeli and Ethiopian groups dispute the Falash Mura's religious and political status. The Israeli government fears that these people are simply using Judaism as an excuse to leave Ethiopia in an effort to improve their lives in a new country. Right-wing member of the Israeli Knesset Bezalel Smotrich was quoted as saying: "This practice will develop into a demand to bring more and more family members not included in the Law of Return. It will open the door to an endless extension of a family chain from all over the world," he wrote, according to Kan. "How can the state explain in the High Court the distinction it makes between the Falashmura and the rest of the world?" Although the government has threatened to stop all efforts to bring these people to Israel, it has continued to address the issue. In 2018, the Israeli government allowed 1,000 Falash Mura to immigrate to Israel. However, members of the Ethiopian community say the process for immigration approval is poorly executed and inaccurate, dividing families. At least 80 percent of the community's members in Ethiopia say they have first-degree relatives living in Israel, and some have been waiting for 20 years to immigrate. Notable Beta Israelis Affiliated groups See also Notes References Further reading General Early accounts History Religion Aliyah Society External links |
======================================== |
[SOURCE: https://www.theverge.com/streaming/879129/netflix-star-search-live] | [TOKENS: 6627] |
A Star is born. One night in the audience of Netflix's most ambitious live show yet. By David Pierce, editor-at-large and Vergecast co-host, with over a decade of experience covering consumer tech; previously at Protocol, The Wall Street Journal, and Wired. Feb 16, 2026, 11:00 AM UTC. Image: Cath Virginia / The Verge, Tudum. Two hours before one of the most important live events in Netflix's history, LA's sprawling CBS Radford Studio Center is abuzz with the strangest combination of things. As I walk through one cavernous soundstage on a Tuesday afternoon, I hear multiple people warming up their voices to sing. I pass a man carefully waving a hair dryer in front of a piano. Outside, a man and a woman, both in scant black leather, walk past with a wave. I'm told they're aerialists. Somewhere in this enormous rehearsal space, there's also a 74-year-old budding standup comedian, an 11-year-old gospel singer, and a dancing border collie. Such is the wondrous, bizarre versatility of Star Search, which on this January evening will start a five-week run of live shows streamed to Netflix's 300 million-plus subscribers. It's a reboot of one of TV's most successful talent shows, though your average Stranger Things fan has likely never heard of it. It also represents one of Netflix's biggest programming bets ever. The company has spent years building the technical infrastructure to stream all over the world, while also quietly testing show formats and interactive features to see how it might tweak a competition show to make sense in a world overrun by social media. The company has done huge one-off events, like a boxing match between Jake Paul and Mike Tyson; it has done live shows, like Everybody's Live with John Mulaney. It has hit shows and blockbuster movies, and the ability to turn a decade-old show nobody's ever heard of into a No. 1 hit. It has wrestling and even some of the NFL. But Netflix has never really tried to do the hardest thing in TV: get its audience to show up, night after night and week after week, to build a simple talent show into an undeniable star-making machine. When shows like this work, they become cultural events in their own right. Think of American Idol, The Voice, whatever Got Talent you like: all are among the most popular and long-lasting franchises you'll find anywhere. "There are those formats that are undeniable," Brandon Riegg, Netflix's head of unscripted programming, tells me. He sees American Idol as the genre's true gold standard.
"We'll never see another Idol, in terms of the gap between Idol and the second-place show," he says, "but we can certainly try to say, 'What's the next iteration?'" The next iteration is about to take place in the next soundstage I enter, which I do cautiously, so as not to run into the enormous pallet filled with every imaginable size and shape of cactus. The Star Search studio is enormous, with bright screens along one wall and a lit-up star at least twice my height casting a golden glow around the room. There are three stages in the room, plus an elevated platform in the center from which the show's celebrity judges will watch the night's proceedings. The only hints as to what's coming tonight: a single chair on one stage, and a tall platform on another that is perfectly suited for falling off to your death in front of millions of people. With about 90 minutes to air, 450 select audience members are finally allowed to leave the parking garage they've been waiting in, and begin to file into the studio. Most don't know quite what to expect, other than some good celebrity sightings and maybe a chance to be on Netflix themselves. Meanwhile, the team at Netflix is about to find out whether their platform can mint a superstar. Star Search's job is to make even amateur performances feel huge. Photo: Kit Karzen / Netflix. A few minutes after the evening's audience takes their seats, the event's emcee, Chuck Dukas, explains how the show works. Most of the audience seems as confused as I am by his explanation, so allow me to try my own: Star Search is America's Got Talent in content and Jeopardy! in structure. The show features eight categories, from dance to stand-up to music to magic, four of which are shown in a given episode. Each category has a champion, crowned the best of the previous episode, and a challenger, there to knock the champion off the throne. Both contestants do their thing, the judges each give them a score between one and four, and whoever receives the highest average score advances to the next episode to take on a new challenger. Win enough times, and you're guaranteed a spot in the season finale, where a single grand champion will win $500,000. Most of the format is borrowed from the original Star Search, which first aired in 1983. It was originally created by Al Masini, a legendary producer who also had a hand in shows from Entertainment Tonight to Baywatch. Masini's idea was to do an amateur talent show, with carefully chosen acts, presented as glitzily as possible. "We gave them the best lighting available, we gave them the best audio… we put them in a Tiffany setting," producer Bob Banner said in an interview with the Television Academy years later. "If they didn't perform well, there's nothing we could do about it, but we gave them every advantage." The show ran until 1995 (not including a reboot nobody really remembers), with Ed McMahon as genial host, and featured an astonishing number of now-household names. Britney Spears, Christina Aguilera, Justin Timberlake, Usher, and Beyoncé all sang on the show as young kids; Adam Sandler, Dave Chappelle, and Rosie O'Donnell all competed as comedians; Sharon Stone competed as a "spokesmodel," a category that mercifully no longer exists, in the show's very first episode. There is something about the Star Search structure that feels distinctly modern, even four decades after its debut. You can watch a lot of old episodes and clips on YouTube, and it is remarkable just how fast the show moved in those early days.
There’s no long windup to each contestant, no heartstring-pulling backstory documentary; there is just a singer and then another singer. An actor and another actor. McMahon slings his arm around the contestant, they get a score, and we’re off to the next one.Ultimately, that format meant that Star Search had nothing else to offer you but performances. Nearly everyone I spoke to said that casting Star Search was the hardest and most important part of the development process, and that while a show like The Voice or American Idol can survive on the strength of its format and cast alone, Star Search lives and dies a performance at a time. “We’re not booking bad acts to get jokes out of it, you know?” says David Friedman, the show’s co-showrunner. “We’re saying, basically, ‘Look, there’s two acts here, we’re looking for a star. Is there one out here that’s a star?’”This turns out to have two distinct advantages for Netflix. The first is a more universal appeal: While senses of humor and cultural references may differ across age groups, borders, and demographics, good singing is good singing to pretty much everyone. “And if you don’t like singing,” Friedman says, “wait five minutes. We’ll have a magician for you.”The second, and maybe more important, is that Star Search functions eerily like a TikTok feed. Look, a dancer! Ooh, magic! Here’s three minutes of stand-up! Time for a cover of your favorite *NSYNC song! Jesse Collins, one of the show’s executive producers, embraces the comparison. “In a way, social media has prepared the world for this,” he says. “You’ll go on TikTok and you’ll fall in love with somebody that does tricks or is a comedian or is a singer or like someone like an IShowSpeed.”The opportunity for Netflix, Collins says, is to do it bigger and better. To take people out of their parking garages, bedrooms, and passenger seats, and put them back in the “Tiffany setting.” Many of the acts on the show’s first season already have large social followings, some in the millions, which seems to stretch the definition of an amateur talent show. But there may be something still uniquely star-making about Netflix.Jeff Gaspin, Netflix’s vice president of unscripted series, says he was convinced of all this as soon as he saw the pitch for the show. A group led by the actress Taraji P. Henson had been developing a reboot for a while, and Gaspin immediately thought it might work for Netflix. But he offered a wrinkle: What if the audience could vote? In the original show, the in-studio audience was occasionally called upon to break the judges’ tie scores, but Gaspin thought maybe the audience should operate as effectively a fourth judge, voting in real time from their couches. And he wanted them to be able to vote with their remotes, right on screen, during the broadcast.Netflix’s product team spent months building a voting system into its apps for mobile devices (where you can vote if you want to), and onto every TV platform and streaming box you can think of. They ran tests with audiences and tested voting in somewhat innocuous places, like a live cooking show hosted by celebrity chef Dave Chang, in which the audience got to rate sandwiches. It all worked, and early testers used it and loved it. So on Star Search, viewers get a pop-up at the end of every performance asking them to rate it from one to five stars, and the average score makes up a quarter of the contestant’s overall rating. Ironically, the only people who don’t get to vote are those of us in the studio audience. 
We’re under very strict instructions not to use our phones.Eric Adrien Williams, an 11-year-old gospel singer, was the first contestant in the new Star Search. Photo: Greg Gayne / NetflixGoing into the evening, I expected the live voting tools to be on everyone’s minds. If not that, then certainly the uniquely global audience potentially tuning in. But the cast and crew actually seemed more worried about another very Netflix-y thing: This edition of Star Search has no commercial breaks. Commercials, see, are a godsend to the cast and crew. “That’s the time you spend two and a half minutes going, Okay, where are we going next?” Friedman says. “And saying, Oh, the tape’s not ready? Here’s what we’re going to do.” Breaks also give a show its shape, and without them, “there’s no natural ebb and flow anymore,” he says. “It’s just going to have to go.”Ahead of the first show, Friedman has been chatting with Anthony Anderson, the Black-ish star who is Star Search’s new host. “I’m in his ear, and I’m like, ‘Look, you’re on live TV with no commercials. If something is not working for you, own it,’” Friedman says. “We have passed the point in which you have to hide stuff — viewers like to be brought in on the ride.” Part of making live TV, multiple people told me, is being able to embrace the chaos.On this Tuesday evening, at least, there is surprisingly little chaos. With about an hour to spare before the 6PM live start, emcee Chuck announces that we’re going to pretape the magic portion of the competition, for reasons no one ever explains. (This eventually causes some conspiracy theories, as viewers of the “live” show notice some clues that they’re seeing old footage.) Both competitors’ tricks go off without a hitch, everyone goes backstage, and Anderson comes back out to practice his intro to the show. Then he’s banished with three and a half minutes to air, and all that’s left to do is wait.Precisely at the top of the hour, Netflix’s iconic “tudum” sound thunders through the room, Anderson walks out, and the live show commences. Anderson explains to the camera how the show works, has the audience vote on his suit to practice, and kicks off the proceedings. The rest of the show is a blur of singers (Friedman told me just ahead of air that two young kids would be going first, and that yes, he knows it’s kind of cheating to use adorable children to make people like your show), dancers, magicians, and musical groups.There were, of course, plenty of things to work on. The three celebrity judges — Sarah Michelle Gellar, Jelly Roll, and Chrissy Teigen — were all far too nice to the contestants, even when they gave them bad scores. They also all talked way too much. The sound mixing, particularly on the groups, didn’t always actually mix. At one point, Anderson was visibly confused by the teleprompter, which was instructing him to look in three different places for three different scoring numbers. There were some solid performances, but no obvious stars, and a couple of genuine stinkers. Mostly, the whole thing took too long. This supposedly 60-minute show lasted 90. Still, the voting worked, the stream stayed live, and the whole room seemed to exhale as the confetti fell and the lights turned off.This was just the first night. There were seven more to do in the next four weeks, plus a grand finale episode on February 17th. Judging from the early reactions, Star Search wasn’t quite a cultural phenomenon yet, but Friedman says he knows even Netflix can’t create a hit all at once. 
Two hours before one of the most important live events in Netflix’s history, LA’s sprawling CBS Radford Studio Center is abuzz with the strangest combination of things. As I walk through one cavernous soundstage on a Tuesday afternoon, I hear multiple people warming up their voices to sing. I pass a man carefully waving a hair dryer in front of a piano. Outside, a man and a woman, both in scant black leather, walk past with a wave. I’m told they’re aerialists. Somewhere in this enormous rehearsal space, there’s also a 74-year-old budding standup comedian, an 11-year-old gospel singer, and a dancing border collie. 
Such is the wondrous, bizarre versatility of Star Search, which on this January evening will start a five-week run of live shows streamed to Netflix’s 300 million-plus subscribers. It’s a reboot of one of TV’s most successful talent shows, though your average Stranger Things fan has likely never heard of it. It also represents one of Netflix’s biggest programming bets ever. The company has spent years building the technical infrastructure to stream all over the world, while also quietly testing show formats and interactive features to see how it might tweak a competition show to make sense in a world overrun by social media. The company has done huge one-off events, like a boxing match between Jake Paul and Mike Tyson; it has done live shows, like Everybody’s Live with John Mulaney. It has hit shows and blockbuster movies, and the ability to turn a decade-old show nobody’s ever heard of into a No. 1 hit. It has wrestling and even some of the NFL. But Netflix has never really tried to do the hardest thing in TV: get its audience to show up, night after night and week after week, to build a simple talent show into an undeniable star-making machine. When shows like this work, they become cultural events in their own right. Think of American Idol, The Voice, whatever Got Talent you like — all are among the most popular and long-lasting franchises you’ll find anywhere. “There are those formats that are undeniable,” Brandon Riegg, Netflix’s head of unscripted programming, tells me. He sees American Idol as the genre’s true gold standard. “We’ll never see another Idol, in terms of the gap between Idol and the second-place show,” he says, “but we can certainly try to say, ‘What’s the next iteration?’” The next iteration is about to take place in the next soundstage I enter, which I do cautiously, so as not to run into the enormous pallet filled with every imaginable size and shape of cactus. The Star Search studio is enormous, with bright screens along one wall and a lit-up star at least twice my height casting a golden glow around the room. There are three stages in the room, plus an elevated platform in the center from which the show’s celebrity judges will watch the night’s proceedings. The only hints as to what’s coming tonight: a single chair on one stage, and a tall platform on another that is perfectly suited for falling off to your death in front of millions of people. With about 90 minutes to air, 450 select audience members are finally allowed to leave the parking garage they’ve been waiting in, and begin to file into the studio. Most don’t know quite what to expect, other than some good celebrity sightings and maybe a chance to be on Netflix themselves. Meanwhile, the team at Netflix is about to find out whether their platform can mint a superstar. A few minutes after the evening’s audience takes their seats, the event’s emcee, Chuck Dukas, explains how the show works. Most of the audience seems as confused as I am by his explanation, so allow me to try my own: Star Search is America’s Got Talent in content and Jeopardy! in structure. The show features eight categories, from dance to stand-up to music to magic, four of which are shown in a given episode. Each category has a champion, crowned the best of the previous episode, and a challenger, there to knock the champion off the throne. Both contestants do their thing, the judges each give them a score between one and four, and whoever receives the highest average score advances to the next episode to take on a new challenger. 
Win enough times, and you’re guaranteed a spot in the season finale, where a single grand champion will win $500,000. Most of the format is borrowed from the original Star Search, which first aired in 1983. It was originally created by Al Masini, a legendary producer who also had a hand in shows from Entertainment Tonight to Baywatch. Masini’s idea was to do an amateur talent show, with carefully chosen acts, presented as glitzily as possible. “We gave them the best lighting available, we gave them the best audio… we put them in a Tiffany setting,” producer Bob Banner said in an interview with the Television Academy years later. “If they didn’t perform well, there’s nothing we could do about it, but we gave them every advantage.” The show ran until 1995 (not including a reboot nobody really remembers), with Ed McMahon as genial host, and featured an astonishing number of now-household names. Britney Spears, Christina Aguilera, Justin Timberlake, Usher, and Beyoncé all sang on the show as young kids; Adam Sandler, Dave Chappelle, and Rosie O’Donnell all competed as comedians; Sharon Stone competed as a “spokesmodel,” a category that mercifully no longer exists, in the show’s very first episode. There is something about the Star Search structure that feels distinctly modern, even four decades after its debut. You can watch a lot of old episodes and clips on YouTube, and it is remarkable just how fast the show moved in those early days. There’s no long windup to each contestant, no heartstring-pulling backstory documentary; there is just a singer and then another singer. An actor and another actor. McMahon slings his arm around the contestant, they get a score, and we’re off to the next one. Ultimately, that format meant that Star Search had nothing else to offer you but performances. Nearly everyone I spoke to said that casting Star Search was the hardest and most important part of the development process, and that while a show like The Voice or American Idol can survive on the strength of its format and cast alone, Star Search lives and dies a performance at a time. “We’re not booking bad acts to get jokes out of it, you know?” says David Friedman, the show’s co-showrunner. “We’re saying, basically, ‘Look, there’s two acts here, we’re looking for a star. Is there one out here that’s a star?’” This turns out to have two distinct advantages for Netflix. The first is a more universal appeal: While senses of humor and cultural references may differ across age groups, borders, and demographics, good singing is good singing to pretty much everyone. “And if you don’t like singing,” Friedman says, “wait five minutes. We’ll have a magician for you.” The second, and maybe more important, is that Star Search functions eerily like a TikTok feed. Look, a dancer! Ooh, magic! Here’s three minutes of stand-up! Time for a cover of your favorite *NSYNC song! Jesse Collins, one of the show’s executive producers, embraces the comparison. “In a way, social media has prepared the world for this,” he says. “You’ll go on TikTok and you’ll fall in love with somebody that does tricks or is a comedian or is a singer or like someone like an IShowSpeed.” The opportunity for Netflix, Collins says, is to do it bigger and better. To take people out of their parking garages, bedrooms, and passenger seats, and put them back in the “Tiffany setting.” Many of the acts on the show’s first season already have large social followings, some in the millions, which seems to stretch the definition of an amateur talent show. 
But there may be something still uniquely star-making about Netflix. Jeff Gaspin, Netflix’s vice president of unscripted series, says he was convinced of all this as soon as he saw the pitch for the show. A group led by the actress Taraji P. Henson had been developing a reboot for a while, and Gaspin immediately thought it might work for Netflix. But he offered a wrinkle: What if the audience could vote? In the original show, the in-studio audience was occasionally called upon to break the judges’ tie scores, but Gaspin thought maybe the audience should operate as effectively a fourth judge, voting in real time from their couches. And he wanted them to be able to vote with their remotes, right on screen, during the broadcast. Netflix’s product team spent months building a voting system into its apps for mobile devices (where you can vote if you want to), and onto every TV platform and streaming box you can think of. They ran tests with audiences and tested voting in somewhat innocuous places, like a live cooking show hosted by celebrity chef Dave Chang, in which the audience got to rate sandwiches. It all worked, and early testers used it and loved it. So on Star Search, viewers get a pop-up at the end of every performance asking them to rate it from one to five stars, and the average score makes up a quarter of the contestant’s overall rating. Ironically, the only people who don’t get to vote are those of us in the studio audience. We’re under very strict instructions not to use our phones. Going into the evening, I expected the live voting tools to be on everyone’s minds. If not that, then certainly the uniquely global audience potentially tuning in. But the cast and crew actually seemed more worried about another very Netflix-y thing: This edition of Star Search has no commercial breaks. Commercials, see, are a godsend to the cast and crew. “That’s the time you spend two and a half minutes going, Okay, where are we going next?” Friedman says. “And saying, Oh, the tape’s not ready? Here’s what we’re going to do.” Breaks also give a show its shape, and without them, “there’s no natural ebb and flow anymore,” he says. “It’s just going to have to go.” Ahead of the first show, Friedman has been chatting with Anthony Anderson, the Black-ish star who is Star Search’s new host. “I’m in his ear, and I’m like, ‘Look, you’re on live TV with no commercials. If something is not working for you, own it,’” Friedman says. “We have passed the point in which you have to hide stuff — viewers like to be brought in on the ride.” Part of making live TV, multiple people told me, is being able to embrace the chaos. On this Tuesday evening, at least, there is surprisingly little chaos. With about an hour to spare before the 6PM live start, emcee Chuck announces that we’re going to pretape the magic portion of the competition, for reasons no one ever explains. (This eventually causes some conspiracy theories, as viewers of the “live” show notice some clues that they’re seeing old footage.) Both competitors’ tricks go off without a hitch, everyone goes backstage, and Anderson comes back out to practice his intro to the show. Then he’s banished with three and a half minutes to air, and all that’s left to do is wait. Precisely at the top of the hour, Netflix’s iconic “tudum” sound thunders through the room, Anderson walks out, and the live show commences. Anderson explains to the camera how the show works, has the audience vote on his suit to practice, and kicks off the proceedings. 
The rest of the show is a blur of singers (Friedman told me just ahead of air that two young kids would be going first, and that yes, he knows it’s kind of cheating to use adorable children to make people like your show), dancers, magicians, and musical groups. There were, of course, plenty of things to work on. The three celebrity judges — Sarah Michelle Gellar, Jelly Roll, and Chrissy Teigen — were all far too nice to the contestants, even when they gave them bad scores. They also all talked way too much. The sound mixing, particularly on the groups, didn’t always actually mix. At one point, Anderson was visibly confused by the teleprompter, which was instructing him to look in three different places for three different scoring numbers. There were some solid performances, but no obvious stars, and a couple of genuine stinkers. Mostly, the whole thing took too long. This supposedly 60-minute show lasted 90. Still, the voting worked, the stream stayed live, and the whole room seemed to exhale as the confetti fell and the lights turned off. This was just the first night. There were seven more to do in the next four weeks, plus a grand finale episode on February 17th. Judging from the early reactions, Star Search wasn’t quite a cultural phenomenon yet, but Friedman says he knows even Netflix can’t create a hit all at once. “We do this for five weeks and hopefully it builds, and hopefully the excitement builds, and we get to the end and there’ll be one champion — and hopefully you discover the next somebody.” Everybody studiously avoids jinxing the next season, because Netflix is famous for canceling and forgetting about projects it deems not to be working. But it’s clear the company hopes this was the first night of many in this studio. Just as we’re all filing out, heading back into a mild LA evening, Dukas the emcee booms back out over the audience telling everyone to take their seats. “Okay!” he says. “We’re going to do the show again in Spanish, and then we’ll get you out of here.” |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTEStockdale199540_187-0] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research but decided to turn what it had developed with Nintendo and Sega into a console of its own, based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the system gained Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that it would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over those of other developers; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded compatibility should further hardware revisions be made. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenge given the 3.5 megabyte restriction. 
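As a rough illustration (not stated in the article itself), the 3.5 megabyte figure can be read as the sum of the console's separate memory pools: the hardware section later in this article lists 2 MB of main RAM and 1 MB of video RAM, and the remaining 512 KB of sound RAM assumed below is a commonly cited figure for the machine rather than one given here. A minimal C sketch of that arithmetic:

/* Back-of-the-envelope check of the "3.5 megabyte restriction" quoted above.
 * Main RAM and VRAM figures appear in this article's hardware section;
 * the 512 KB of sound RAM is an assumed, commonly cited figure, not stated here. */
#include <stdio.h>

#define KIB             1024u
#define MIB             (1024u * KIB)
#define MAIN_RAM_BYTES  (2u * MIB)      /* 2 MB main RAM */
#define VRAM_BYTES      (1u * MIB)      /* 1 MB video RAM */
#define SOUND_RAM_BYTES (512u * KIB)    /* 512 KB sound RAM (assumed) */

int main(void)
{
    unsigned long total = MAIN_RAM_BYTES + VRAM_BYTES + SOUND_RAM_BYTES;
    /* Prints 3.5, matching the restriction the Ocean engineer described. */
    printf("total game-accessible memory: %.1f MB\n", total / (double)MIB);
    return 0;
}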
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released officially because the trademark had been registered by a third party; the market was initially dominated by the officially distributed Sega Saturn, but as the Sega console was withdrawn, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after its withdrawal from the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" ("Live in Your World. Play in Ours.", with the controller's button symbols standing in for the missing letters) and "U R NOT E" (with a red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with the owners of prominent nightclubs such as Ministry of Sound, as well as festival promoters, to organise dedicated PlayStation areas where demos of select games could be played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and capable of 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can also draw up to 4,000 sprites and 180,000 polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications using a C compiler. 
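To illustrate the point above that the console has no dedicated 2D processor and instead submits "sprites" to the GPU as screen-space polygons, here is a minimal, purely illustrative C sketch; the types and function names are hypothetical and are not the Net Yaroze or official SDK API.

/* Illustrative sketch only: hypothetical types modelling the idea that a 2D
 * sprite on the PlayStation is just a screen-space quad handed to the GPU. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int16_t x, y;            /* screen-space vertex position */
} Vertex;

typedef struct {
    Vertex  v[4];            /* four corners of the quad */
    uint8_t r, g, b;         /* flat colour (24-bit, matching the 16.7M-colour depth) */
} FlatQuad;                  /* hypothetical stand-in for a GPU quad command */

/* Build a screen-aligned quad that stands in for a 2D sprite at (x, y). */
static FlatQuad sprite_quad(int16_t x, int16_t y, int16_t w, int16_t h,
                            uint8_t r, uint8_t g, uint8_t b)
{
    FlatQuad q;
    q.v[0].x = x;     q.v[0].y = y;        /* top-left */
    q.v[1].x = x + w; q.v[1].y = y;        /* top-right */
    q.v[2].x = x;     q.v[2].y = y + h;    /* bottom-left */
    q.v[3].x = x + w; q.v[3].y = y + h;    /* bottom-right */
    q.r = r; q.g = g; q.b = b;
    return q;
}

int main(void)
{
    /* A 32x32 red "sprite" at (100, 50) in a 320x240 display mode. */
    FlatQuad q = sprite_quad(100, 50, 32, 32, 255, 0, 0);
    printf("quad top-left (%d, %d), bottom-right (%d, %d)\n",
           q.v[0].x, q.v[0].y, q.v[3].x, q.v[3].y);
    return 0;
}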
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and is used for instances when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and the controller has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and the PS One had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of the PlayStation BIOS on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, the drive could not detect the wobble frequency (and any duplicate it produced therefore omitted it), since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, experience skipping during full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (known internationally as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute heavily to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, which was the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling those of Sega and Nintendo. 
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5, which was, for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with profits from its video game division coming to account for around 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry forward, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; the company was likely drawn to the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games to consumers at roughly 40% less than the cost of ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller print runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
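As a purely illustrative sketch of the economics described above, the Python snippet below shows how a retail price roughly 40% lower can coexist with unchanged per-unit net revenue when the cost of the media falls sharply. Every dollar figure and the retail margin are hypothetical assumptions chosen for illustration; none of them come from this article.

# Hypothetical illustration of the cartridge-versus-CD economics described above.
# All figures are assumptions for illustration, not values from the article.
def net_revenue(retail_price, media_cost, retail_margin=0.30):
    """Per-unit revenue left for the publisher/platform after the retailer's
    assumed margin and the cost of manufacturing the media."""
    return retail_price * (1 - retail_margin) - media_cost

cartridge = net_revenue(retail_price=70.0, media_cost=25.0)  # assumed ROM cartridge
cd = net_revenue(retail_price=42.0, media_cost=2.0)          # assumed CD, ~40% cheaper at retail

print(f"cartridge net per unit: ${cartridge:.2f}")  # $24.00
print(f"CD net per unit:        ${cd:.2f}")         # $27.40

Under these assumed numbers the CD title nets slightly more per unit despite a shelf price 40% lower, which is the shape of the argument made in the paragraph above.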
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip, with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://www.mako.co.il/hix-feed/2026-m02_w03/shorts-a648c59ea4f7c91027.htm] | [TOKENS: 226] |
He decided to give his dog soda water. The dog, you may be surprised to hear, was not impressed. 21.02.2026 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Rupert_Murdoch] | [TOKENS: 11254] |
Contents Rupert Murdoch Keith Rupert Murdoch (/ˈmɜːrdɒk/ MUR-dok; born 11 March 1931) is an Australian and American former business magnate, investor, and media mogul. Through his company News Corp, he is the owner of hundreds of local, national, and international publishing outlets around the world, including in the United Kingdom (The Sun and The Times), in Australia (The Daily Telegraph, Herald Sun, and The Australian), in the United States (The Wall Street Journal and the New York Post), book publisher HarperCollins, and the television broadcasting channels Sky News Australia and Fox News (through Fox Corporation). He was also the owner of Sky (until 2018), 21st Century Fox (until 2019), and the now-defunct News of the World. With a net worth of US$21.7 billion as of 2 March 2022,[update] Murdoch is the 31st richest person in the United States and the 71st richest in the world according to Forbes magazine. Due to his extensive wealth and influence over media and politics, Murdoch has been described as an oligarch. After his father Keith Murdoch died in 1952, Murdoch took over the running of The News, a small Adelaide newspaper owned by his father. In the 1950s and 1960s, Murdoch acquired a number of newspapers in Australia and New Zealand before expanding into the United Kingdom in 1969, taking over the News of the World, followed closely by The Sun. In 1974, Murdoch moved to New York City, to expand into the American market; however, he retained interests in Australia and the United Kingdom. In 1981, Murdoch bought The Times, his first British broadsheet, and, in 1985, became a naturalized American citizen, giving up his Australian citizenship, to satisfy the legal requirement for American television network ownership. In 1986, keen to adopt newer electronic publishing technologies, Murdoch consolidated his British printing operations in London, causing bitter industrial disputes. His holding company News Corporation acquired Twentieth Century Fox (1985), HarperCollins (1989), and The Wall Street Journal (2007). Murdoch formed the British broadcaster BSkyB in 1990 and, during the 1990s, expanded into Asian networks and South American television. By 2000, Murdoch's News Corporation owned more than 800 companies in more than 50 countries, with a net worth of more than $5 billion. In July 2011, Murdoch faced allegations that his companies, including the News of the World, owned by News Corporation, had been regularly hacking the phones of celebrities, royalty, and public citizens. Murdoch faced police and government investigations into bribery and corruption by the British government and FBI investigations in the United States. On 21 July 2012, Murdoch resigned as a director of News International. In September 2023, Murdoch announced he would be stepping down as chairman of Fox Corp. and News Corp. Many of Murdoch's papers and television channels have been accused of right-wing bias and misleading coverage to support his business interests and political allies, and some have linked his influence with major political developments in the United Kingdom, United States and Australia. The Murdoch family was involved in a U.S. court case in which his three children—Elisabeth, Prudence, and James—challenged his bid to amend the family trust to ensure that his eldest son, Lachlan, retains control of News Corp and Fox Corp, rather than the trust benefiting all of his six children, as is specified in its "irrevocable" terms. 
In September 2025, they reached a settlement, giving Lachlan ownership of Murdoch's media empire. Early life and education Keith Rupert Murdoch was born on 11 March 1931 in Melbourne, Victoria, Australia, the second of four children of Sir Keith Arthur Murdoch (1885–1952) and Dame Elisabeth Joy (née Greene; 1909–2012). He is of English, Irish and Scottish ancestry. His parents were also born in Melbourne. Murdoch's father was a war correspondent and later a regional newspaper magnate; he owned two newspapers in Adelaide, South Australia, and a radio station in a remote mining town, and was chairman of the Herald and Weekly Times publishing company. Murdoch has three sisters: Helen (1929–2004), Anne (born 1935) and Janet (born 1939). His paternal grandfather, Patrick John Murdoch, was a Scottish-born Presbyterian minister who emigrated to Australia in 1884. Murdoch attended Geelong Grammar School, where he was co-editor of the school's official journal The Corian and editor of the student journal If Revived. He then studied philosophy, politics and economics in England at Worcester College, Oxford, where he kept a bust of Lenin in his rooms and came to be known as "Red Rupert". He was a member of the Oxford University Labour Party, stood for secretary of the Labour Club and managed Oxford Student Publications Limited, the publishing house of Cherwell. After his father's death from cancer in 1952, Murdoch's mother did charity work as the life governor of the Royal Women's Hospital in Melbourne and established the Murdoch Children's Research Institute; at the age of 102 (in 2011), she had 74 descendants. While his father was alive, Murdoch worked part-time at the Melbourne Herald and was groomed by his father to take over the family business. After his father's death, he began working as a sub-editor with the Daily Express for two years. Activities in Australia and New Zealand Following his father's death, when he was 21, Murdoch returned from Oxford to take charge of what was left of the family business. After his father's stake in the Herald group was liquidated to pay taxes, what remained was News Limited, which had been established in 1923. Rupert Murdoch turned its Adelaide newspaper, The News, its main asset, into a major success. He began to direct his attention to acquisition and expansion, buying the troubled Sunday Times in Perth, Western Australia (1956) and over the next few years acquiring suburban and provincial newspapers in New South Wales, Queensland, Victoria and the Northern Territory, including the Sydney afternoon tabloid The Daily Mirror (1960). The Economist describes Murdoch as "inventing the modern tabloid", as he developed a pattern for his newspapers, increasing sports and scandal coverage and adopting eye-catching headlines. Murdoch's first foray outside Australia involved the purchase of a controlling interest in the New Zealand daily The Dominion. In January 1964, while touring New Zealand with friends in a rented Morris Minor after sailing across the Tasman, Murdoch read of a takeover bid for the Wellington paper by the British-based Canadian newspaper magnate Lord Thomson of Fleet. On the spur of the moment, he launched a counter-bid. A four-way battle for control ensued in which the 32-year-old Murdoch was ultimately successful. Later in 1964, Murdoch launched The Australian, Australia's first national daily newspaper, which was based first in Canberra and later in Sydney. 
In 1972, Murdoch acquired the Sydney morning tabloid The Daily Telegraph from Australian media mogul Sir Frank Packer, who later regretted selling it to him. In 1984, Murdoch was appointed Companion of the Order of Australia (AC) for services to publishing. After the Keating government relaxed media ownership laws, in 1986 Murdoch launched a takeover bid for The Herald and Weekly Times, which was the largest newspaper publisher in Australia. There was a three-way takeover battle between Murdoch, Fairfax and Robert Holmes à Court, with Murdoch succeeding after agreeing to some divestments. In 1999, Murdoch significantly expanded his music holdings in Australia by acquiring the controlling share in a leading Australian independent label, Michael Gudinski's Mushroom Records; he merged that with Festival Records, and the result was Festival Mushroom Records (FMR). Both Festival and FMR were managed by Murdoch's son James Murdoch for several years. Murdoch found a political ally in Sir John McEwen, leader of the Australian Country Party (now known as the National Party of Australia), who was governing in coalition with the larger Menzies-Holt-Gorton Liberal Party. From the first issue of The Australian, Murdoch began taking McEwen's side in every issue that divided the long-serving coalition partners. (The Australian, 15 July 1964, first edition, front page: "Strain in Cabinet, Liberal-CP row flares.") It was an issue that threatened to split the coalition government and open the way for the stronger Australian Labor Party to dominate Australian politics. It was the beginning of a long campaign that served McEwen well. After McEwen and Menzies retired, Murdoch threw his growing power behind the Australian Labor Party under the leadership of Gough Whitlam and duly saw it elected on a social platform that included universal free health care, free education for all Australians to tertiary level, recognition of the People's Republic of China, and public ownership of Australia's oil, gas and mineral resources. Rupert Murdoch's backing of Whitlam turned out to be brief. Murdoch had already started his short-lived National Star newspaper in America, and was seeking to strengthen his political contacts there. Asked about the 2007 Australian federal election at News Corporation's annual general meeting in New York on 19 October 2007, its chairman Rupert Murdoch said: "I am not commenting on anything to do with Australian politics. I'm sorry. I always get into trouble when I do that." Pressed as to whether he believed Prime Minister John Howard should continue as prime minister, he said: "I have nothing further to say. I'm sorry. Read our editorials in the papers. It'll be the journalists who decide that – the editors." Murdoch described Howard's successor, Labor Party Prime Minister Kevin Rudd, as "more ambitious to lead the world [in tackling climate change] than to lead Australia" and criticised Rudd's expansionary fiscal policies in the wake of the 2008 financial crisis as unnecessary. In 2009, in response to accusations by Rudd that News Limited was running vendettas against him and his government, Murdoch opined that Rudd was "oversensitive". 
Although News Limited's interests are extensive, also including the Daily Telegraph, the Courier-Mail and the Adelaide Advertiser, it was suggested by the commentator Mungo MacCallum in The Monthly that "the anti-Rudd push, if coordinated at all, was almost certainly locally driven" as opposed to being directed by Murdoch, who also took a different position from local editors on such matters as climate change and stimulus packages to mitigate the effects of the 2008 financial crisis. Murdoch is a supporter of the formation of an Australian republic, having campaigned for such a change during the 1999 referendum. Activities in the United Kingdom In 1968, Murdoch entered the British newspaper market with his acquisition of the populist News of the World, followed in 1969 by the purchase of the struggling daily The Sun from IPC. Murdoch turned The Sun into a tabloid format and reduced costs by using the same printing press for both newspapers. On acquiring it, he appointed Albert 'Larry' Lamb as editor and – Lamb recalled later – told him: "I want a tearaway paper with lots of tits in it". In 1997, The Sun attracted 10 million daily readers. In 1981, Murdoch acquired the struggling Times and Sunday Times from Canadian newspaper publisher Lord Thomson of Fleet. Ownership of The Times came to him through his relationship with Lord Thomson, who had grown tired of losing money on it as a result of an extended period of industrial action that stopped publication. In light of the success and expansion of The Sun, the owners believed that Murdoch could turn the papers around. Harold Evans, editor of the Sunday Times from 1967, was switched to the daily Times, though he stayed only a year amid editorial conflict with Murdoch. During the 1980s and early 1990s, Murdoch's publications were generally supportive of Britain's Prime Minister Margaret Thatcher. At the end of the Thatcher/Major era, Murdoch switched his support to the Labour Party and its leader, Tony Blair. The closeness of his relationship with Blair and their secret meetings to discuss national policies was to become a political issue in Britain. This later changed, with The Sun, in its English editions, publicly renouncing the ruling Labour government and lending its support to David Cameron's Conservative Party, which soon afterwards formed a coalition government. In Scotland, where the Conservatives had suffered a complete annihilation in 1997, the paper began to endorse the Scottish National Party (though not yet its flagship policy of independence), which soon after came to form the first-ever outright majority in the proportionally elected Scottish Parliament. Former Prime Minister Gordon Brown's official spokesman said in November 2009 that Brown and Murdoch "were in regular communication" and that "there is nothing unusual in the prime minister talking to Rupert Murdoch". In 1986, Murdoch introduced electronic production processes to his newspapers in Australia, Britain and the United States. The greater degree of automation led to significant reductions in the number of employees involved in the printing process. In England, the move roused the anger of the print unions, resulting in a long and often violent dispute that played out in Wapping, one of London's docklands areas, where Murdoch had installed a purpose-built publishing facility, using the very latest electronic newspaper technology, in an old warehouse. 
The bitter Wapping dispute started with the dismissal of 6,000 employees who had gone on strike and resulted in street battles and demonstrations. Many on the political left in Britain alleged the collusion of Margaret Thatcher's Conservative government with Murdoch in the Wapping affair, as a way of damaging the British trade union movement. In 1987, the dismissed workers accepted a settlement of £60 million. In 1998, Murdoch made an attempt to buy the football club Manchester United F.C. with an offer of £625 million, the largest amount ever offered for a sports club, but this failed. The bid was blocked by the United Kingdom's Competition Commission, which stated that the acquisition would have "hurt competition in the broadcast industry and the quality of British football". Murdoch's British-based satellite network, Sky Television, incurred massive losses in its early years of operation. As with many of his other business interests, Sky was heavily subsidised by the profits generated by his other holdings, but Murdoch convinced rival satellite operator British Satellite Broadcasting to accept a merger on his terms in 1990. The merged company, BSkyB, has dominated the British pay-TV market ever since, pursuing direct-to-home (DTH) satellite broadcasting. By 1996, BSkyB had more than 3.6 million subscribers, triple the number of cable customers in the UK. Murdoch has a seat on the Strategic Advisory Board of Genie Oil and Gas, having jointly invested with Lord Rothschild in a 5.5% stake in the company, which conducted shale gas and oil exploration in Colorado, Mongolia, Israel, and the occupied Golan Heights. In response to print media's decline and the increasing influence of online journalism during the 2000s, Murdoch proclaimed his support of the micropayments model for obtaining revenue from online news, although this has been criticised by some. In January 2018, the CMA blocked Murdoch from taking over the remaining 61% of BSkyB he did not already own, over fears that such market dominance could lead to censorship of the media. His bid for BSkyB was later approved by the CMA on the condition that he sell Sky News to the Walt Disney Company, which was already set to acquire 21st Century Fox. However, it was Comcast that won control of BSkyB in a blind auction ordered by the CMA. Murdoch ultimately sold his 39% of BSkyB to Comcast. News Corporation has subsidiaries in the Bahamas, the Cayman Islands, the Channel Islands and the Virgin Islands. From 1986, News Corporation's annual tax bill averaged around seven percent of its profits. In Britain, in the 1980s, Murdoch formed a close alliance with Conservative prime minister Margaret Thatcher. In February 1981, when Murdoch, already owner of The Sun and The News of the World, sought to buy The Times and The Sunday Times, Thatcher's government let his bid pass without referring it to the Monopolies and Mergers Commission, as would have been usual practice at the time. Although contact between the two before this point had been explicitly denied in an official history of The Times, documents found in Thatcher's archives in 2012 revealed that a secret meeting had taken place a month before, in which Murdoch briefed Thatcher on his plans for the paper, such as taking on trade unions. The Sun credited itself with helping her successor John Major to win an unexpected election victory in the 1992 general election, which had been expected to end in a hung parliament or a narrow win for Labour, then led by Neil Kinnock. 
In the general elections of 1997, 2001 and 2005, Murdoch's papers were either neutral or supported Labour under Tony Blair.[citation needed] The Labour Party, from when Blair became leader in 1994, had moved from the centre-left to a more centrist position on many economic issues before 1997. Murdoch identifies himself as a libertarian, saying "What does libertarian mean? As much individual responsibility as possible, as little government as possible, as few rules as possible. But I'm not saying it should be taken to the absolute limit." In a speech he delivered in New York in 2005, Murdoch claimed that Blair described the BBC coverage of the Hurricane Katrina disaster, which was critical of the Bush administration's response, as full of hatred of America. On 28 June 2006, the BBC reported that Murdoch and News Corporation were considering backing new Conservative leader David Cameron at the next General Election – still up to four years away. In a later interview in July 2006, when he was asked what he thought of the Conservative leader, Murdoch replied "Not much". In a 2009 blog, it was suggested that in the aftermath of the News of the World phone hacking scandal, which might yet have transatlantic implications, Murdoch and News Corporation might have decided to back Cameron. Despite this, there had already been a convergence of interests between the two men over the muting of Britain's communications regulator Ofcom. In August 2008, Cameron accepted free flights to hold private talks and attend private parties with Murdoch on his yacht, the Rosehearty. Cameron declared in the Commons register of interests he accepted a private plane provided by Murdoch's son-in-law, public relations guru Matthew Freud; Cameron did not reveal his talks with Murdoch. The gift of travel in Freud's Gulfstream IV private jet was valued at around £30,000. Other guests attending the "social events" included the then EU trade commissioner Peter Mandelson, the Russian oligarch Oleg Deripaska and co-chairman of NBC Universal Ben Silverman. The Conservatives did not disclose what was discussed. In July 2011, it emerged that Cameron had met key executives of Murdoch's News Corporation a total of 26 times during the 14 months that Cameron had served as prime minister up to that point. It was also reported that Murdoch had given Cameron a personal guarantee that there would be no risk attached to hiring Andy Coulson, the former editor of News of the World, as the Conservative Party's communication director in 2007. This was in spite of Coulson having resigned as editor over phone hacking by a reporter. Cameron chose to take Murdoch's advice, despite warnings from the deputy prime minister, Nick Clegg, Lord Ashdown and The Guardian. Coulson resigned his post in 2011 and was later arrested and questioned on allegations of further criminal activity at the News of the World, specifically the phone hacking scandal. As a result of the subsequent trial, Coulson was sentenced to 18 months in jail. In June 2016, The Sun supported Vote Leave in the United Kingdom European Union membership referendum. Murdoch called the Brexit result "wonderful", comparing the decision to withdraw from the EU to "a prison break ... we're out". Anthony Hilton, economics editor for the Evening Standard but describing a period when he interviewed Murdoch for The Guardian, quoted Murdoch as justifying his Euroscepticism with the words "When I go into Downing Street, they do what I say; when I go to Brussels, they take no notice." 
Murdoch later denied saying this in a letter to The Guardian. With some exceptions, The Sun had generally been supportive of the government of Conservative prime minister Boris Johnson. Murdoch and his employees were the media representatives with whom ministers from the Cabinet and Treasury most frequently held meetings during the first two years of Johnson's government. However, newspaper circulation in general, including among subsidiaries of News International, fell sharply in the United Kingdom during the early 21st century, leading some commentators to suggest that Murdoch was not as influential in British political debate by the early 2020s as he had once been. News International phone hacking scandal In July 2011, Murdoch, along with his youngest son James, provided testimony before a British parliamentary committee regarding phone hacking. In the UK, his media empire came under fire as investigators probed reports of 2011 phone hacking. On 14 July 2011, the Culture, Media and Sport Committee of the House of Commons served a summons on Murdoch, his son James, and his former CEO Rebekah Brooks to testify before the committee five days later. After an initial refusal, the Murdochs confirmed they would attend once the committee issued them a summons to Parliament. The day before the hearing, the website of the News Corporation publication The Sun was hacked, and a false story was posted on the front page claiming that Murdoch had died. Murdoch described the day of the hearing as "the most humble day of my life". He argued that since he ran a global business of 53,000 employees and News of the World was "just 1%" of this, he was not ultimately responsible for what went on at the tabloid. He added that he had not considered resigning, and that he and the other top executives had been completely unaware of the hacking. On 15 July, Murdoch attended a private meeting in London with the family of Milly Dowler, where he personally apologised for the hacking of their murdered daughter's voicemail by a company he owned. On 16 and 17 July, News International published two full-page apologies in many of Britain's national newspapers. The first apology took the form of a letter, signed by Murdoch, in which he apologised for the "serious wrongdoing" that had occurred. The second was titled "Putting right what's gone wrong", and gave more detail about the steps News International was taking to address the public's concerns. In the wake of the allegations, Murdoch accepted the resignations of Brooks and of Les Hinton, the head of Dow Jones, who had been chairman of Murdoch's British newspaper division when some of the abuses happened. Both denied any knowledge of any wrongdoing under their command. On 27 February 2012, the day after the first issue of The Sun on Sunday was published, Deputy Assistant Commissioner Sue Akers informed the Leveson Inquiry that police were investigating a "network of corrupt officials" as part of their inquiries into phone hacking and police corruption. She said that evidence suggested a "culture of illegal payments" at The Sun and that these payments were allegedly authorised at a senior level. In testimony on 25 April, Murdoch did not deny the quote attributed to him by his former editor of The Sunday Times, Harold Evans: "I give instructions to my editors all round the world, why shouldn't I in London?" 
On 1 May 2012, the Culture, Media and Sport Committee issued a report stating that Murdoch was "not a fit person to exercise the stewardship of a major international company". On 3 July 2013, the Exaro website and Channel 4 News broke the story of a secret recording. The recording had been made by The Sun journalists, and in it Murdoch can be heard telling them that the whole investigation was one big fuss over nothing, and that he, or his successors, would take care of any journalists who went to prison. He said: "Why are the police behaving in this way? It's the biggest inquiry ever, over next to nothing." Activities in the United States Murdoch made his first acquisition in the United States in 1973, when he purchased the San Antonio Express-News. In 1974, Murdoch moved to New York City to expand into the American market; however, he retained interests in Australia and Britain. Soon afterwards, he founded Star, a supermarket tabloid, and in 1976, he purchased the New York Post. On 3 January 1977, Murdoch, along with columnist James Brady, founded Page Six, the gossip column of the New York Post. On 4 September 1985, Murdoch became a naturalized citizen to satisfy the legal requirement that only American citizens were permitted to own American television stations. In March 1984, Marvin Davis sold Marc Rich's interest in 20th Century Fox to Murdoch for $250 million, because Rich's trade deals with Iran were in breach of US sanctions at the time. Davis later backed out of a deal with Murdoch to purchase John Kluge's Metromedia television stations. Murdoch bought the stations on his own and later bought out Davis's remaining stake in Fox for $325 million. The six television stations owned by Metromedia formed the nucleus of the Fox Broadcasting Company, founded on 9 October 1986, which later had great success with programs including The Simpsons and The X-Files. In 1986, Murdoch bought Misty Mountain, a Wallace Neff-designed house on Angelo Drive in Beverly Hills. The house was the former residence of Jules C. Stein. Murdoch sold the house to his son James in 2018. In 1987, Murdoch created a global television special, the World Music Video Awards, a music award ceremony in which winners were chosen by viewers in eight countries. In Australia, during 1987, he bought The Herald and Weekly Times Ltd., the company that his father had once managed. Rupert Murdoch's 20th Century Fox bought out the remaining assets of Four Star Television from Ronald Perelman's Compact Video in 1996. Most of Four Star Television's library of programs is controlled by 20th Century Fox Television today. After Murdoch's numerous acquisitions during the buyout era of the eighties, News Corporation had built up debts of $7 billion (much of it from Sky TV in the UK), despite the many assets the company held. The high levels of debt caused Murdoch to sell many of the American magazine interests he had acquired in the mid-1980s. In 1993, Murdoch's Fox Network took exclusive coverage of the National Football Conference (NFC) of the National Football League (NFL) from CBS and increased programming to seven days a week. In 1995, Fox became the object of scrutiny from the Federal Communications Commission (FCC), when it was alleged that News Ltd.'s Australian base made Murdoch's ownership of Fox illegal. However, the FCC ruled in Murdoch's favour, stating that his ownership of Fox was in the best interests of the public. 
That same year, Murdoch announced a deal with MCI Communications to develop a major news website, and launched the political magazine The Weekly Standard. Also that year, News Corporation launched the Foxtel pay television network in Australia in partnership with Telstra. In 1996, Murdoch decided to enter the cable news market with the Fox News Channel, a 24-hour cable news station. Ratings studies released in 2009 showed that the network was responsible for nine of the top ten programs in the "Cable News" category at that time. Rupert Murdoch and Ted Turner (founder and former owner of CNN) are long-standing rivals. In late 2003, Murdoch acquired a 34% stake in Hughes Electronics, the operator of the largest American satellite TV system, DirecTV, from General Motors for US$6 billion. His Fox movie studio had global hits with Titanic and Avatar. In 2004, Murdoch announced that he was moving News Corporation's headquarters from Adelaide, Australia, to the United States. Choosing an American domicile was designed to ensure that American fund managers could purchase shares in the company, since many were deciding not to buy shares in non-American companies. On 20 July 2005, News Corporation bought Intermix Media Inc., which held Myspace and other social networking-themed websites, for US$580 million, making Murdoch a major player in online media. In June 2011, it sold off Myspace for US$35 million. On 11 September 2005, News Corporation announced that it would buy IGN Entertainment for US$650 million. In May 2007, Murdoch made a $5 billion offer to purchase Dow Jones & Company. The Bancroft family, who had owned Dow Jones & Company for 105 years and controlled 64% of its shares at the time, initially declined the offer. Later, the Bancroft family confirmed a willingness to consider a sale. The Associated Press reported that, besides Murdoch, supermarket magnate Ron Burkle and Internet entrepreneur Brad Greenspan were among the other interested parties. In 2007, Murdoch acquired Dow Jones & Company, which gave him such publications as The Wall Street Journal, Barron's Magazine, the Far Eastern Economic Review (based in Hong Kong) and SmartMoney. In June 2014, Murdoch's 21st Century Fox made a bid for Time Warner at $85 per share in stock and cash ($80 billion total), which Time Warner's board of directors turned down in July. Time Warner's CNN unit would have been sold to ease antitrust concerns over the purchase. On 5 August 2014, the company announced it had withdrawn its offer for Time Warner, and said it would spend $6 billion buying back its own shares over the following 12 months. Murdoch left his post as CEO of 21st Century Fox in 2015 but retained control of the company until it was purchased by Disney in 2019. A number of television broadcasting assets were spun off into the Fox Corporation before the acquisition and are still owned by Murdoch. These include Fox News, of which Murdoch was acting CEO from 2016 until 2019, following the resignation of Roger Ailes due to accusations of sexual harassment. Murdoch considered merging News Corp and Fox Corporation, but in January 2023 announced to the board that he had withdrawn the idea, stating that he and his son Lachlan had "determined that a combination [was] not optimal for shareholders of News Corp and FOX" at that time. The Special Committee of the Board of Directors of News Corp that had been set up to investigate the matter was dissolved. 
In September 2023, Rupert Murdoch retired and handed over the leadership of his businesses to his eldest son, Lachlan. McKnight (2010) identifies four characteristics of his media operations: free market ideology; unified positions on matters of public policy; global editorial meetings; and opposition to liberal bias in other public media. In The New Yorker, Ken Auletta writes that Murdoch's support for Edward I. Koch while he was running for mayor of New York "spilled over onto the news pages of the Post, with the paper regularly publishing glowing stories about Koch and sometimes savage accounts of his four primary opponents." According to The New York Times, Ronald Reagan's campaign team credited Murdoch and the Post for his victory in New York in the 1980 United States presidential election. Reagan later "waived a prohibition against owning a television station and a newspaper in the same market," allowing Murdoch to continue to control The New York Post and The Boston Herald while expanding into television. On 8 May 2006, the Financial Times reported that Murdoch would be hosting a fund-raiser for Senator Hillary Clinton's (D-New York) Senate re-election campaign. In a 2008 interview with Walt Mossberg, Murdoch was asked whether he had "anything to do with the New York Post's endorsement of Barack Obama in the democratic primaries". Without hesitating, Murdoch replied, "Yeah. He is a rock star. It's fantastic. I love what he is saying about education. I don't think he will win Florida [...] but he will win in Ohio and the election. I am anxious to meet him. I want to see if he will walk the walk." In 2010, News Corporation gave US$1 million to the Republican Governors Association and $1 million to the US Chamber of Commerce. Murdoch served on the board of directors of the libertarian Cato Institute. He is also a supporter of the Stop Online Piracy Act and the Protect Intellectual Property Act. Murdoch was reported in 2011 as advocating more open immigration policies in Western nations generally. In the United States, Murdoch and chief executives from several major corporations, including Hewlett-Packard, Boeing and Disney, joined New York City Mayor Michael Bloomberg to form the Partnership for a New American Economy to advocate "for immigration reform – including a path to legal status for all illegal aliens now in the United States". The coalition, reflecting Murdoch and Bloomberg's own views, also advocates significant increases in legal immigration to the United States as a means of boosting America's sluggish economy and lowering unemployment. The Partnership's immigration policy prescriptions are notably similar to those of the Cato Institute and the US Chamber of Commerce, both of which Murdoch has supported in the past. The Wall Street Journal editorial page has similarly advocated for increased legal immigration, in contrast to the staunch anti-immigration stance of Murdoch's British newspaper, The Sun. On 5 September 2010, Murdoch testified before the House Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law on the "Role of Immigration in Strengthening America's Economy". In his testimony, Murdoch called for ending mass deportations and endorsed a "comprehensive immigration reform" plan that would include a pathway to citizenship for all illegal immigrants. 
In the 2012 United States presidential election, Murdoch was critical of the competence of Mitt Romney's team but was nonetheless strongly supportive of a Republican victory, tweeting: "Of course I want him [Romney] to win, save us from socialism, etc." In October 2015, Murdoch stirred controversy when he praised Republican presidential candidate Ben Carson and referenced President Barack Obama, tweeting, "Ben and Candy Carson terrific. What about a real black President who can properly address the racial divide? And much else." He later apologised, tweeting, "Apologies! No offence meant. Personally find both men charming." During Donald Trump's term as US president, Murdoch showed support for him through the news stories broadcast across his media empire, including on Fox News. In early 2018, Mohammad bin Salman, the crown prince of Saudi Arabia, had an intimate dinner at Murdoch's Bel Air estate in Los Angeles. Murdoch is a strong supporter of Israel and its domestic policies. In October 2010, the Anti-Defamation League in New York City presented Murdoch with its International Leadership Award "for his stalwart support of Israel and his commitment to promoting respect and speaking out against antisemitism". However, in April 2021, in a letter to Lachlan Murdoch, ADL director Jonathan Greenblatt wrote that the organisation would no longer make such an award to his father. This was in the immediate context of accusations made by the ADL against Fox News presenter Tucker Carlson and his apparent espousal of the White replacement theory. In 2023, during a defamation lawsuit by Dominion Voting Systems against Fox News, Murdoch acknowledged that some Fox News commentators were endorsing election fraud claims they knew were false. On 18 April 2023, Fox and Dominion settled for $787.5 million. In July 2025, Murdoch, along with The Wall Street Journal's parent company Dow Jones, was sued by Trump for $10 billion over an article claiming that a birthday greeting bearing Trump's name had been sent to Jeffrey Epstein in 2003. Activities in Europe Murdoch owns a controlling interest in Sky Italia, a satellite television provider in Italy. Murdoch's business interests in Italy have been a source of contention since they began. In 2009, Murdoch won a media dispute with then Italian prime minister Silvio Berlusconi. A judge ruled that the then prime minister's media arm, Mediaset, had prevented News Corporation's Italian unit, Sky Italia, from buying advertisements on its television networks. Activities in Asia In November 1986, News Corporation purchased a 35% stake in the South China Morning Post group for about US$105 million. At that time, the SCMP group was a publicly listed company owned by HSBC, Hutchison Whampoa and Dow Jones & Company. In December 1986, Dow Jones & Company offered to sell News Corporation the roughly 19% stake it owned in SCMP for US$57.2 million, and by 1987 News Corporation had completed the full takeover. In September 1993, News Corporation agreed to sell a 34.9% share in SCMP to Robert Kuok's Kerry Media for US$349 million. In 1994, News Corporation sold the remaining 15.1% share in SCMP to MUI Group, disposing of the Hong Kong newspaper.[better source needed] In June 1993, News Corporation attempted to acquire a 22% share in TVB, a terrestrial television broadcaster in Hong Kong, for about $237 million, but Murdoch's company abandoned the bid, as the Hong Kong government would not relax its regulations on foreign ownership of broadcasting companies. 
In 1993, News Corporation acquired Star TV (renamed as Star in 2001), a Hong Kong company headed by Richard Li, from Hutchison Whampoa for $1 billion (Souchou, 2000:28), and subsequently set up offices for it throughout Asia. The deal enabled News International to broadcast from Hong Kong to India, China, Japan, and over thirty other countries in Asia, becoming one of the biggest satellite television networks in the East; however, the deal did not work out as Murdoch had planned, because the Chinese government placed restrictions on it that prevented it from reaching most of China.[citation needed] In 2009, News Corporation reorganised Star: the original company's operations in East Asia, Southeast Asia and the Middle East were integrated into Fox International Channels, and Star India was spun off (while remaining within News Corporation). Succession court case (2024) As of December 2024[update], the whole Murdoch family is involved in a court case in Reno, Nevada, in which James, his sister Elisabeth and half-sister Prudence are challenging their father's bid to amend the family trust to ensure that his eldest son, Lachlan, retains control of News Corp and Fox Corp, rather than benefiting all of his six children, as is specified in the "irrevocable" terms of the trust. According to The New York Times, Murdoch Senior wants his companies to remain politically conservative, and sees his other children as too politically liberal. The irrevocable family trust was set up after Rupert and Anna Murdoch's divorce in 1999, to hold the family's 28.5% stake in News Corp. It relates only to the children born before then, giving them equal say in the fate of the business after Rupert's death. Chloe and Grace Murdoch, Rupert's children with third wife Wendi Deng, will have no say in the business, although they will share the stock proceeds. The case follows Rupert's attempt to change the trust in 2023, and the Nevada probate commissioner's finding that he was allowed to amend the trust "if he is able to show he is acting in good faith and for the sole benefit of his heirs". Rupert Murdoch argues that interference by the other siblings would cause a financial loss to Fox, and that it would therefore be "in their own best interests if they have their votes taken away from them". He argues that preserving the outlet's conservative editorial stance against interference by the more politically moderate siblings would better protect its commercial value. The case has led to the three children becoming estranged from their father, with none of them attending his wedding to his fifth wife, Elena Zhukova, in June 2024. On 9 September 2025, News Corp announced changes to the structure of the trust which controls the family's ownership of the company and Fox News. As part of the changes, a partial share sale by the family trust was agreed, reported to be worth $3.3 billion, for Prudence MacLeod, Elisabeth Murdoch and James Murdoch's shares in News Corp and Fox News. All three siblings will cease to be beneficiaries of any holdings in News Corp and Fox and will no longer have any voting rights. As part of the new structure, Murdoch's eldest son and chairman of News Corp, Lachlan Murdoch, will assume full control of the family trust which owns both companies, including full voting rights. It was announced that Murdoch's younger children, Grace Murdoch and Chloe Murdoch, will join as beneficiaries of the trust with no voting rights.
Personal life In 2003, Murdoch bought "Rosehearty", an 11 bedroom home on a 5-acre waterfront estate in Centre Island, New York. In May 2013, he purchased the Moraga Estate, an estate, vineyard and winery in Bel Air, Los Angeles, California. In 2019, Murdoch and his new wife Jerry Hall purchased Holmwood, an 18th-century house and estate in the English village of Binfield Heath, some 4 miles (6.4 km) north-east of Reading. In late 2020, during the COVID-19 pandemic, it was reported that Murdoch and Hall had been isolating in their Binfield Heath home for much of the year. He received his first COVID-19 vaccine in nearby Henley-on-Thames on 16 December. In 1956, Murdoch married Patricia Booker, a former shop assistant and flight attendant from Melbourne; the couple had their only child, Prudence, in 1958. They divorced in 1967. In 1967, Murdoch married Anna Torv, a Scottish-born cadet journalist working for his Sydney newspaper The Daily Mirror. In January 1998, three months before the announcement of his separation from Anna, a Roman Catholic, Murdoch was made a Knight Commander of the Order of Saint Gregory the Great (KSG), a papal honour awarded by Pope John Paul II. While Murdoch would often attend Mass with Torv, he never converted to Catholicism. Torv and Murdoch had three children: Elisabeth Murdoch (born in Sydney, Australia on 22 August 1968), Lachlan Murdoch (born in London, UK on 8 September 1971), and James Murdoch, (born in London on 13 December 1972). Murdoch's companies published two novels by his wife: Family Business (1988) and Coming to Terms (1991). They divorced in June 1999. Anna Murdoch received a settlement of US$1.2 billion in assets. On 25 June 1999, 17 days after divorcing his second wife, Murdoch, then aged 68, married Chinese-born Wendi Deng. She was 30, a recent Yale School of Management graduate, and a newly appointed vice-president of his STAR TV. Murdoch had two daughters with her: Grace (born 2001) and Chloe (born 2003). Near the end of his marriage to Wendi, hearsay concerning a link with Chinese intelligence (which was later proven to be unfounded) became problematic to their relationship. On 13 June 2013, a News Corporation spokesperson confirmed that Murdoch filed for divorce from Deng in New York City, US. According to the spokesman, the marriage had been irretrievably broken for more than six months. Murdoch also ended his long-standing friendship with Tony Blair after suspecting him of having an affair with Deng while they were still married. On 11 January 2016, Murdoch announced his engagement to former model Jerry Hall in a notice in The Times newspaper. On 4 March 2016, Murdoch, a week short of his 85th birthday, and 59-year-old Hall were married in London, at St Bride's, Fleet Street with a reception at Spencer House; this was Murdoch's fourth marriage. In June 2022, The New York Times reported that Murdoch and Hall were set to divorce, citing two anonymous sources. Hall filed for divorce on 1 July 2022 citing irreconcilable differences; the divorce was finalised in August 2022. During Saint Patrick's Day celebrations in 2023, Murdoch, who is quarter Irish, proposed to his partner, Ann Lesley Smith. The engaged couple first met at an event in September 2022. In April 2023, two weeks after the couple were engaged, Murdoch suddenly called off the engagement. The split was said to be caused by Murdoch's discomfort with Smith's religious views and her infatuation with Fox News host Tucker Carlson, reportedly referring to him as "a messenger from God". 
Carlson was fired from Fox News three weeks later. Murdoch became engaged again in March 2024, to retired Russian molecular biologist Elena Zhukova, who is also the ex-wife of Russian businessman Alexander Zhukov. Their wedding was held in June 2024 at Murdoch's estate in California. Murdoch was 93 and Zhukova 67 years old. Through the marriage he became stepfather to Dasha Zhukova, ex-wife of Roman Abramovich. Murdoch has six children, and is grandfather to thirteen grandchildren. His eldest child, Prudence MacLeod, was appointed on 28 January 2011 to the board of Times Newspapers Ltd, part of News International, which publishes The Times and The Sunday Times. Murdoch's elder son Lachlan, formerly the Deputy Chief Operating Officer at the News Corporation and publisher of the New York Post, was Murdoch's heir apparent before resigning from his executive posts at the global media company at the end of July 2005. Lachlan's departure left James Murdoch, Chief Executive of the satellite television service British Sky Broadcasting since November 2003 as the only Murdoch son still directly involved with the company's operations, though Lachlan has agreed to remain on the News Corporation's board. After graduating from Vassar College and marrying classmate Elkin Kwesi Pianim (the son of Ghanaian financial and political mogul Kwame Pianim) in 1993, Murdoch's daughter Elisabeth and her husband purchased a pair of NBC-affiliate television stations in California, KSBW and KSBY, with a $35 million loan provided by her father. By quickly re-organising and re-selling them at a $12 million profit in 1995, Elisabeth emerged as an unexpected rival to her brothers for the eventual leadership of the publishing dynasty. But, after divorcing Pianim in 1998 and quarrelling publicly with her assigned mentor Sam Chisholm at BSkyB, she struck out on her own as a television and film producer in London. She has since enjoyed independent success, in conjunction with her second husband, Matthew Freud, the great-grandson of Sigmund Freud, whom she met in 1997 and married in 2001. Until September 2023, it was not known how long Murdoch would remain as News Corporation's CEO. For a while the American cable television entrepreneur John Malone was the second-largest voting shareholder in News Corporation after Murdoch himself, potentially undermining the family's control. In 2007, the company announced that it would sell certain assets and give cash to Malone's company in exchange for its stock. In 2007, the company issued Murdoch's older children voting stock. Murdoch has two children with Wendi Deng: Grace (b. New York, November 2001) and Chloe (b. New York, July 2003). It was revealed in September 2011 that Tony Blair is Grace's godfather. There is reported to be tension between Murdoch and his oldest children over the terms of a trust holding the family's 28.5% stake in News Corporation, estimated in 2005 to be worth about $6.1 billion. Under the trust, his children by Wendi Deng share in the proceeds of the stock but have no voting privileges or control of the stock. Voting rights in the stock are divided 50/50 between Murdoch on the one side and his children of his first two marriages. Murdoch's voting privileges are not transferable but will expire upon his death and the stock will then be controlled solely by his children from the prior marriages, although their half-siblings will continue to derive their share of income from it. 
It is Murdoch's stated desire to have his children by Deng given a measure of control over the stock proportional to their financial interest in it. It does not appear that he has any strong legal grounds to contest the present arrangement, and both ex-wife Anna and their three children are said to be strongly resistant to any such change. In the arts and media In 1999, the Ted Turner-owned TBS channel aired an original sitcom, The Chimp Channel. This featured an all-simian cast and the role of an Australian TV veteran named Harry Waller. The character is described as "a self-made gazillionaire with business interests in all sorts of fields. He owns newspapers, hotel chains, sports franchises and genetic technologies, as well as everyone's favourite cable TV channel, The Chimp Channel". Waller is thought to be a parody of Murdoch, a long-time rival of Turner. In 2004, the movie Outfoxed: Rupert Murdoch's War on Journalism included many interviews accusing Fox News of pressuring reporters to report only one side of news stories, in order to influence viewers' political opinions. In 2012, the satirical telemovie Hacks, broadcast on the UK's Channel 4, drew obvious comparisons with Murdoch through the fictional character "Stanhope Feast", portrayed by Michael Kitchen, as well as with other central figures in the phone hacking scandal. The 2013 film Anchorman 2: The Legend Continues features an Australian character inspired by Rupert Murdoch who owns a cable news television channel. Murdoch was part of the inspiration for Logan Roy, the protagonist of the TV show Succession (2018–2023), who is portrayed by Brian Cox. Murdoch has also been portrayed by a number of actors in films and TV series. Murdoch and rival newspaper and publishing magnate Robert Maxwell are thinly fictionalised as "Keith Townsend" and "Richard Armstrong" in The Fourth Estate (1996) by British novelist and former MP Jeffrey Archer. In the novel Dunbar (2017) by Edward St Aubyn, the eponymous lead character is at least partly inspired by Murdoch. Young Rupert: The Making of the Murdoch Empire (2023), by Walter Marsh, has been praised for the high quality of its research. It focuses on Murdoch as a child and young man, in particular his early career at The News in Adelaide and his relationship with the editor-in-chief Rohan Rivett. Several commentators on the book remarked on Murdoch's embrace of socialism in his early years. Towards the end of his touring career, Eagles drummer and lead singer Don Henley would often dedicate his 1982 hit "Dirty Laundry" to Murdoch and Bill O'Reilly. Australian psychedelic rock band King Gizzard & the Lizard Wizard wrote the track "Evilest Man" about Murdoch, for their 2022 album Omnium Gatherum. Influence, wealth, and reputation In 2014, Forbes estimated Rupert Murdoch's wealth at US$13.7 billion. In 2016, Forbes ranked "Rupert Murdoch & Family" as the 35th most powerful person in the world. According to Forbes' 2017 real-time list of the world's billionaires, Murdoch was the 34th richest person in the US and the 96th richest person in the world, with a net worth of US$13.1 billion as of February 2017.[update] In 2019, the Murdoch family were ranked 52nd in Forbes' annual list of the world's billionaires. According to the Forbes list of the world's richest people dated 8 March 2024, Murdoch and family were ranked 100th, with a net worth of $19.5 billion.
In August 2013, Terry Flew, Professor of Media and Communications at Queensland University of Technology, wrote an article for The Conversation in which he investigated a claim by former Australian prime minister Kevin Rudd that Murdoch owned 70% of Australian newspapers in 2011. Flew's article showed that News Corp Australia owned 23% of the nation's newspapers in 2011, according to the Finkelstein Review of Media and Media Regulation, but, at the time of the article, the corporation's titles accounted for 59% of the sales of all daily newspapers, with weekly sales of 17.3 million copies. In connection with Murdoch's testimony to the Leveson Inquiry "into the ethics of the British press", the editor of Newsweek International, Tunku Varadarajan, referred to him as "the man whose name is synonymous with unethical newspapers". News Corp papers were accused of supporting the campaign of the Australian Liberal Party and influencing public opinion during the 2013 federal election. Following the announcement of the Liberal Party victory at the polls, Murdoch tweeted "Aust. election public sick of public sector workers and phony welfare scroungers sucking life out of economy. Other nations to follow in time." In November 2015, former Australian prime minister Tony Abbott said that Murdoch "arguably has had more impact on the wider world than any other living Australian". In late 2015, The Wall Street Journal journalist John Carreyrou began a series of investigative articles on Theranos, the blood-testing start-up founded by Elizabeth Holmes, that questioned its claim to be able to run a wide range of lab tests from a tiny sample of blood from a finger prick. Holmes had turned to Murdoch, whose media empire includes Carreyrou's employer, The Wall Street Journal, to kill the story. Murdoch, who became the biggest investor in Theranos in 2015 as a result of his $125 million injection, refused the request from Holmes, saying that "he trusted the paper's editors to handle the matter fairly." In November 2021, Murdoch accused Google and Facebook, without providing evidence, of stifling conservative viewpoints on their platforms, and called for "substantial reform" and openness in the digital ad supply chain. See also References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-149] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
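As a brief aside on the mojibake glitch mentioned above, a minimal Python sketch (the sample string is arbitrary) shows how it arises when text encoded as UTF-8 is decoded with a mismatched single-byte codec such as Latin-1:

    text = "日本語"                            # arbitrary sample text
    utf8_bytes = text.encode("utf-8")          # the bytes actually stored or transmitted
    garbled = utf8_bytes.decode("latin-1")     # a receiver wrongly assumes Latin-1
    print(garbled)                             # prints unreadable characters: mojibake

    # No bytes were lost, so re-encoding with the wrong codec and decoding
    # with the right one recovers the original text.
    print(garbled.encode("latin-1").decode("utf-8") == text)   # True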
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech, to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equate to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for transferring information, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
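A minimal sketch of fetching a web resource over the HTTP access protocol described earlier in this section, using only Python's standard library (the URL is a placeholder chosen for illustration, not one referenced by the article):

    from urllib.request import urlopen

    # Issue an HTTP(S) GET request for a resource identified by a URI.
    with urlopen("https://example.org/") as response:
        status = response.status                  # numeric HTTP status code, e.g. 200
        body = response.read().decode("utf-8")    # the HTML document the server returned
    print(status, len(body))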
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high-speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the application layer, the transport layer, the internet layer, and the link layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
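As a rough sketch of this layering (the host name example.org is a placeholder chosen for illustration), the following Python fragment resolves a name through the DNS, opens a transport-layer TCP connection that is carried in internet-layer IP packets, and speaks HTTP at the application layer:

    import socket

    host = "example.org"                                       # illustrative host name
    # DNS resolution: map the name to an IP address (IPv4 or IPv6).
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        host, 80, proto=socket.IPPROTO_TCP)[0]
    print("resolved to", sockaddr[0])

    # Application data (HTTP) rides on a transport-layer TCP connection,
    # which is in turn carried in internet-layer IP packets.
    with socket.socket(family, socktype, proto) as s:
        s.connect(sockaddr)
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(s.recv(200).decode("ascii", "replace"))          # first bytes of the HTTP reply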
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
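The CIDR prefixes, netmasks and routing-table lookups described above can be checked with Python's standard ipaddress module; this is a minimal sketch, and the two-entry routing table (including a 0.0.0.0/0 default route) is invented purely for illustration:

    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)                                    # 255.255.255.0
    print(net.num_addresses)                              # 256 addresses (.0 through .255)
    print(ipaddress.ip_address("198.51.100.7") in net)    # True
    print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)   # True

    # Longest-prefix match over a toy routing table, the way a router chooses
    # where to forward a packet; the 0.0.0.0/0 default route matches everything.
    routes = {
        ipaddress.ip_network("198.51.100.0/24"): "local subnet",
        ipaddress.ip_network("0.0.0.0/0"): "default gateway",
    }

    def next_hop(address):
        addr = ipaddress.ip_address(address)
        matches = [n for n in routes if addr in n]
        return routes[max(matches, key=lambda n: n.prefixlen)]

    print(next_hop("198.51.100.42"))                      # local subnet
    print(next_hop("203.0.113.5"))                        # default gateway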
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of web sites, or communication via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Annexation] | [TOKENS: 4079] |
Contents Annexation Annexation, in international law, is the forcible acquisition and assertion of legal title over one state's territory by another state, usually following military occupation of the territory. In current international law, it is generally held to be an illegal act. It is distinct from the concepts of conquest,[a] which describes the gaining of physical control, and cession, in which territory is given or sold through treaty. Annexation can be legitimized if generally recognized by other states and international bodies. The illegality of annexation means that states carrying out such acts usually avoid using the word annexation in describing their actions; in each of the unresolved annexations by Israel, Morocco and Russia, the states have avoided characterizing their actions as such. Evolution of international law International law regarding the use of force by states evolved significantly in the 20th century. Key agreements include the 1907 Porter Convention, the 1920 Covenant of the League of Nations and the 1928 Kellogg–Briand Pact,[b] culminating in Article 2(4) of Chapter I of the United Nations Charter, which is in force today: "All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations". These principles were reconfirmed by the 1970 Friendly Relations Declaration. Since the use of force against territorial integrity or political independence is illegal, the question as to whether title or sovereignty can be transferred in such a situation has been the subject of legal debate. '[A]nnexation by the use of force of the territory of another State or part thereof' is an act of aggression according to the Rome Statute of the International Criminal Court. Illegally annexed territory is considered as still occupied under international law and the provisions of international humanitarian law continue to apply. For precision, such territory may be referred to as "occupied and illegally annexed". In a report to the United Nations General Assembly, Michael Lynk contrasted de jure annexation as a formal declaration by a state that it is claiming permanent sovereignty over territory and de facto annexation without the formal declaration as a descriptive term for a state establishing facts on the ground as the prelude to a future claim of sovereignty. The Fourth Geneva Convention (GCIV) of 1949 amplified the Hague Conventions of 1899 and 1907 with respect to the question of the protection of civilians, and the rules regarding inviolability of rights have "an absolute character", making it much more difficult for a state to bypass international law through the use of annexation.[c] Annexations since the founding of the United Nations During the 1967 Six-Day War, Israel captured East Jerusalem, a part of the West Bank, from Jordan. While Jordan had annexed the West Bank in 1950, it was considered an illegal occupation and Jordan was recognized as the legal sovereign by the United Kingdom and possibly Pakistan. On 31 July 1988, Jordan relinquished this claim. It has remained occupied until the present day. On 27 June 1967, Israel unilaterally extended its law and jurisdiction to East Jerusalem and some of the surrounding area, incorporating about 70 square kilometers of territory into the Jerusalem Municipality. 
At the time, Israel informed the United Nations that its measures constituted administrative and municipal integration rather than annexation; later rulings by the Israeli Supreme Court, however, indicated that East Jerusalem had become part of Israel. In 1980, Israel passed the Jerusalem Law as part of its Basic Law, which declared Jerusalem the "complete and united" capital of Israel. In other words, Israel purported to annex East Jerusalem. The annexation was declared null and void by United Nations Security Council (UNSC) resolutions 252, 267, 271, 298, 465, 476 and 478. Jewish neighborhoods have since been built in East Jerusalem, and Israeli Jews have since also settled in Arab neighborhoods there, though some Jews may have returned from their 1948 expulsion after the Battle for Jerusalem. Only Costa Rica recognized Israel's annexation of East Jerusalem, and those countries that maintained embassies in Israel did not move them to Jerusalem. In 1995, the United States Congress passed the Jerusalem Embassy Act, which recognizes Jerusalem as the united capital of Israel and requires the relocation of the U.S. embassy there. The act included a provision permitting the president to delay its implementation due to national security concerns. This waiver was used by presidents Clinton, Bush, Obama, and Trump, but was allowed to expire in 2019. Law professor Omar M. Dajani and others discuss de facto annexation (also referred to as "creeping annexation"). The debate considers whether, in all the circumstances, there is a pattern of behavior sufficient to conclude that Israel is in violation of the international prohibition against annexation, even absent a formal declaration. Israel occupied two-thirds of the Golan Heights from Syria during the 1967 Six-Day War, and subsequently built Jewish settlements in the area. In 1981, Israel passed the Golan Heights Law, which extended Israeli "law, jurisdiction, and administration" to the area, including the Shebaa farms area. This measure was declared "null and void and without international legal effect" by United Nations Security Council Resolution 497. The Federated States of Micronesia recognized the annexation, and in 2019 the United States joined in recognition. The vast majority of Syrian Druze in Majdal Shams, the largest Syrian village in the Golan, have held onto their Syrian passports. When Israel annexed the Golan Heights in 1981, 95% of the Majdal Shams residents refused Israeli citizenship, and are still firmly of that opinion, in spite of the Syrian Civil War. On 29 November 2012, the United Nations General Assembly reaffirmed it was "[d]eeply concerned that Israel has not withdrawn from the Syrian Golan, which has been under occupation since 1967, contrary to the relevant Security Council and General Assembly resolutions," and "[s]tress[ed] the illegality of the Israeli settlement construction and other activities in the occupied Syrian Golan since 1967." The General Assembly then voted by majority, 110 in favour to 6 against (Canada, Israel, Marshall Islands, Federated States of Micronesia, Palau, United States), with 59 abstentions, to demand a full Israeli withdrawal from the Syrian Golan Heights. On 25 March 2019, the United States recognized the Golan Heights as sovereign Israeli territory.
In response, United Nations Secretary-General António Guterres stated "the status of Golan has not changed", and the decision received worldwide condemnation with European members of the United Nations Security Council noting "we raise our strong concerns about the broader consequences of recognizing illegal annexation and also about broader regional consequences" and that "annexation of territory by force is prohibited under international law", adding that unilateral changes to borders violate "the rules-based international order and the UN Charter". The annexation of Western Sahara occurred in two stages: 1976 and 1979. In 1975, and following the Madrid Accords between Mauritania, Morocco, and Spain, the last Spanish troops withdrew from the territory and ceded the administration to Mauritania and Morocco. On 14 April 1976, the two countries annexed it between themselves via the Western Sahara partition agreement. This was challenged by an independentist movement, the Polisario Front that waged a guerrilla war against both Mauritania and Morocco. On 14 August 1979, after a military putsch, Mauritania renounced all territorial claims to Western Sahara and withdrew its troops. This prompted Morocco to extend its annexation to formerly Mauritanian-controlled areas. A United Nations peace process was initiated in 1991, but it has been stalled, and as of mid-2012, the UN is holding direct negotiations between Morocco and the Polisario front to reach a solution to the conflict. The Sahrawi Arab Democratic Republic is a partially recognized state that has claimed the entire region since 1976. In March 2014, Russia annexed the Crimean Peninsula, which had been a part of Ukraine since 1991 and administers the territory as two federal subjects – the Republic of Crimea and the federal city of Sevastopol. The UN General Assembly considers the Russian possession of Crimea and Sevastopol to be an "attempted annexation" and the Russian Federation an "occupying power". Russia rejects the view that this was an annexation and regards it as an accession to the Russian Federation of a state that had just declared independence from Ukraine following a disputed referendum, and considers it secession as a result of irredentism. A term often used in Russia to describe these events is "re-unification" (воссоединение) to highlight the fact that Crimea was a part of the Russian Empire from 1783 to 1917, and part of the Russian SFSR from 1921 to 1954. Few states recognize this view. Ukraine considers Crimea and Sevastopol its own territory, and oversees the Crimea Platform, an international diplomatic initiative to restore its sovereignty. In September 2022, during the Russo-Ukrainian war, the Russian Federation, following referendums, declared the annexation of territories in southern and eastern Ukraine. As a result, Russia claimed sovereignty over the territories of five Ukrainian oblasts – Luhansk, Donetsk, Zaporizhzhia, Kherson and Mykolaiv – and recognised as its federal subjects Donetsk People's Republic, Luhansk People's Republic, Zaporizhzhia and Kherson Oblasts. In 1952, Ethiopian Emperor Haile Selassie orchestrated a federation with Eritrea. He dissolved it in 1962 and annexed Eritrea, resulting in the Eritrean War of Independence. In 1979, Mauritania abandoned its claim to the southern third of Western Sahara, disestablishing the short-lived province of Western Tiris. The part of former Mandatory Palestine occupied by Jordan during the 1948 Arab–Israeli War was renamed "the West Bank". 
It was annexed to Jordan in 1950 at the request of a Palestinian delegation. It had been questioned, however, how representative that delegation was, and at the insistence of the Arab League, Jordan was considered a trustee only. Only Pakistan and the United Kingdom recognized the annexation by Jordan. It was not condemned by the United Nations Security Council and it remained under Jordanian rule until 1967 when it was occupied by Israel. Jordan did not officially relinquish its claim to rule the West Bank until 1988. Israel has not taken the step of annexing the territory (except for the part of it that was made part of the Jerusalem Municipality). Rather, it has enacted a complex (and highly controversial) system of military government decrees that in effect apply Israeli law in many spheres to Israeli settlements. Following an Indonesian invasion in 1975, East Timor (Timor-Leste) was annexed by Indonesia and was known as Timor Timur. It was regarded by Indonesia as the country's 27th province, but this was never recognised by the United Nations. The people of East Timor resisted Indonesian forces in a prolonged guerrilla campaign. Following a referendum held in 1999 under a UN-sponsored agreement between the two sides, the people of East Timor rejected the offer of autonomy within Indonesia. East Timor achieved independence in 2002 and is now officially known as Timor-Leste. After being allied with Iraq during the Iran–Iraq War (largely out of a desire for Iraqi protection from Iran), Kuwait was invaded and annexed by Iraq (under Saddam Hussein) in August 1990. Hussein's primary justifications included a charge that Kuwaiti territory was in fact an Iraqi province, and that annexation was retaliation for "economic warfare" Kuwait had waged through slant drilling into Iraq's oil supplies. The monarchy was deposed after annexation, and an Iraqi governor installed. United States president George H. W. Bush ultimately condemned Iraq's actions, and moved to drive out Iraqi forces. Authorized by United Nations Security Council resolutions, an American-led coalition of 34 nations fought the Gulf War to reinstate the Kuwaiti Emir. Iraq's invasion (and annexation) was deemed illegal and Kuwait remains an independent nation today. The rule of the Qing dynasty over Tibet was established after a Qing expedition force defeated the Dzungar Khanate which had occupied Tibet in 1720, and lasted until the fall of the Qing dynasty in 1912. The Imperial Edict of the Abdication of the Qing Emperor issued in 1912 provided the legal basis for the Republic of China (ROC) to inherit all Qing territories, including Tibet. However, the ROC had no effective control over Tibet from 1912 to 1951; in the opinion of the Chinese government, this condition does not represent Tibet's de jure independence as many other parts of China also enjoyed de facto independence when the Chinese state was torn by warlordism, Japanese invasion, and civil war. Tibet came under the control of the People's Republic of China (PRC) after attempts by the Government of Tibet to gain international recognition, efforts to modernize its military, negotiations between the Government of Tibet and the PRC, and a military conflict in the Chamdo area of western Kham in October 1950. Many analysts consider the incorporation of Tibet into the PRC an annexation. If the actions of 1950 constituted an annexation, it was subsequently legalized by the Seventeen Point Agreement, signed by the Government of Tibet in October 1951.
From 1959 onwards, claims were made that this agreement was signed under pressure; academics have debated this ever since, but Tibet is recognized internationally as part of China. After the withdrawal of the British Empire from India, each of the princely states of India and Pakistan that had been protectorates of the British Empire was given the choice of joining India, joining Pakistan, or resuming its former status as a fully independent state. While most of the princely states opted to join either Pakistan or India, Hyderabad State elected instead to resume full independence. Following the expiration of ultimatums from India, the Indian military launched its Operation Polo on 13 September 1948 and invaded Hyderabad. After conquering most of Hyderabad in five days of warfare, the Nizam signed a treaty on 18 September 1948 that saw Hyderabad annexed by India. In 1954, the residents of Dadra and Nagar Haveli, a Portuguese enclave within India, ended Portuguese rule with the help of nationalist volunteers. From 1954 to 1961, the territory enjoyed de facto independence. In 1961, the territory was merged with India after its government signed an agreement with the Indian government. In 1961, India and Portugal engaged in a brief military conflict over Portuguese-controlled Goa and Daman and Diu. India invaded and conquered the areas after 36 hours of fighting, thus ending 451 years of Portuguese colonial rule in India. The action was viewed in India as a liberation of historically Indian territory; in Portugal, however, the loss of both enclaves was seen as a national tragedy. A condemnation of the action by the United Nations Security Council was vetoed by the Soviet Union. Goa and Daman and Diu were incorporated into India. Portugal recognized India's sovereignty over Goa in a treaty in December 1974. During the British colonial rule in India, Sikkim had an ambiguous status, as an Indian princely state or as an Indian protectorate. Prior to Indian independence, Jawaharlal Nehru, acting as the leader of the Executive Council, agreed that Sikkim would not be treated as an Indian state. Between 1947 and 1950, Sikkim enjoyed de facto independence. However, Indian independence spurred popular political movements in Sikkim and the ruling Chogyal came under pressure. He requested Indian help to quell the uprising, which was offered. Subsequently, in 1950, India signed a treaty with Sikkim bringing it under its suzerainty, and controlling its external affairs, defence, diplomacy and communications. A state council was established in 1955 to allow for constitutional government under the Sikkimese monarch. Meanwhile, trouble was brewing in the state after the Sikkim National Congress demanded fresh elections and greater representation for the Nepalese. In the 1967 Nathu La and Cho La clashes, Chinese border attacks were repulsed. In 1973, riots in front of the palace led to a formal request for protection from India. The Chogyal was proving to be extremely unpopular with the people. In 1975, the Kazi (prime minister) appealed to the Indian Parliament for a change in Sikkim's status so that it could become a state of India. In April, the Indian Army moved into Sikkim, seizing the city of Gangtok and disarming the Palace Guards. A referendum was held in which 97.5% of the voting people (59% of the people entitled to vote) voted to join the Indian Union.
A few weeks later, on 16 May 1975, Sikkim officially became the 22nd state of the Indian Union and the monarchy was abolished. Western New Guinea officially became part of Indonesia through the Act of Free Choice in 1969, supervised by the UN. Based on the New York Agreement (1962) between Indonesia and the Netherlands, the UN held the Act of Free Choice to determine whether Western New Guinea would join Indonesia or not. Through a deliberation of representatives, the majority voted to join, and this result was accepted by the UN General Assembly in a session in November 1969. This decision was implicitly considered to recognize Western New Guinea as part of Indonesia, because the General Assembly did not reject the report of the UN Secretary General regarding the implementation of Act of Free Choice. West Papua is the western half of the island of New Guinea and smaller islands to its west. The separatist Free Papua Movement (OPM) has engaged in a small-scale yet bloody conflict with the Indonesian military since the 1960s. North Vietnam de facto annexed South Vietnam following the military defeat and effective dissolution of its government system and military on 30 April 1975. Vietnam was officially reunited one year later as the merger of the Provisional Revolutionary Government of the Republic of South Vietnam and North Vietnam. One example of a claimed annexation after World War II is the Kingdom of Norway's southward expansion of the dependent territory Queen Maud Land. On most maps, there had been an unclaimed area between Queen Maud Land's borders of 1939 and the South Pole until 12 June 2015, when Norway formally claimed to have annexed that area. On 18 September 1955 at precisely 10:16 am, Rockall was declared officially annexed by the British Crown when Lieutenant-Commander Desmond Scott RN, Sergeant Brian Peel RM, Corporal AA Fraser RM, and James Fisher (a civilian naturalist and former Royal Marine), were deposited on the island by a Royal Navy helicopter from HMS Vidal (coincidentally named after the man who first charted the island). The team cemented in a brass plaque on Hall's Ledge and hoisted the Union Flag to stake the UK's claim. However, any effect of this annexation on valuable maritime rights claims under UNCLOS in the waters beyond 12 nautical miles from Rockall is neither claimed by Britain nor recognised by Denmark (for the Faroe Islands), Iceland or Ireland.[citation needed] See also References Notes Further reading |
======================================== |
[SOURCE: https://news.ycombinator.com/item?id=47086181] | [TOKENS: 34116] |
Tech summary:
- 15k tok/sec on 8B dense 3bit quant (llama 3.1)
- limited KV cache
- 880mm^2 die, TSMC 6nm, 53B transistors
- presumably 200W per chip
- 20x cheaper to produce
- 10x less energy per token for inference
- max context size: flexible
- mid-sized thinking model upcoming this spring on same hardware
- next hardware supposed to be FP4
- a frontier LLM planned within twelve months
This is all from their website, I am not affiliated. The founders have 25 years of career across AMD, Nvidia and others, $200M VC so far. Certainly interesting for very low latency applications which need < 10k tokens context. If they deliver in spring, they will likely be flooded with VC money. Not exactly a competitor for Nvidia but probably for 5-10% of the market. Back of napkin, the cost for 1mm^2 of 6nm wafer is ~$0.20. So 1B parameters need about $20 of die. The larger the die size, the lower the yield. Supposedly the inference speed remains almost the same with larger models. Interview with the founders: https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...
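To make the back-of-napkin math in the summary comment above explicit, here is a quick check using only the numbers quoted there; all inputs are the commenter's rough estimates, not vendor figures, and yield losses are ignored.

# Back-of-napkin check of the comment's numbers; all inputs are the
# commenter's rough estimates, not official figures.
die_area_mm2 = 880          # quoted die size
cost_per_mm2 = 0.20         # quoted ~$0.20 per mm^2 of 6nm wafer
params_billions = 8         # Llama 3.1 8B

die_cost = die_area_mm2 * cost_per_mm2          # ~$176 per die
cost_per_billion = die_cost / params_billions   # ~$22 per billion parameters
print(f"${die_cost:.0f} per die, ~${cost_per_billion:.0f} per 1B parameters")
# Consistent with the "~$20 of die per 1B parameters" estimate, before yield.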
reply 1) 16k tokens / second is really stunningly fast. There's an old saying about any factor of 10 being a new science / new product category, etc. This is a new product category in my mind, or it could be. It would be incredibly useful for voice agent applications, realtime loops, realtime video generation, etc. 2) https://nvidia.github.io/TensorRT-LLM/blogs/H200launch.html Has H200 doing 12k tokens/second on llama 2 12b fp8. Knowing these architectures that's likely a 100+ ish batched run, meaning time to first token is almost certainly slower than taalas. Probably much slower, since Taalas is like milliseconds. 3) Jensen has these pareto curve graphs — for a certain amount of energy and a certain chip architecture, choose your point on the curve to trade off throughput vs latency. My quick math is that these probably do not shift the curve. The 6nm process vs 4nm process is likely 30-40% bigger, draws that much more power, etc; if we look at the numbers they give and extrapolate to an fp8 model (slower), smaller geometry (30% faster and lower power) and compare 16k tokens/second for taalas to 12k tokens/s for an h200, these chips are in the same ballpark curve. However, I don't think the H200 can reach into this part of the curve, and that does make these somewhat interesting.
In fact even if you had a full datacenter of H200s already running your model, you'd probably buy a bunch of these to do speculative decoding - it's an amazing use case for them; speculative decoding relies on smaller distillations or quants to get the first N tokens sorted, only when the big model and small model diverge do you infer on the big model. Upshot - I think these will sell, even on 6nm process, and the first thing I'd sell them to do is speculative decoding for bread and butter frontier models.
The thing that I'm really very skeptical of is the 2 month turnaround. To get leading edge geometry turned around on arbitrary 2 month schedules is .. ambitious. Hopeful. We could use other words as well. I hope these guys make it! I bet the v3 of these chips will be serving some bread and butter API requests, which will be awesome. reply I often remind people two orders of quantitative change is a qualitative change. > The thing that I'm really very skeptical of is the 2 month turnaround. To get leading edge geometry turned around on arbitrary 2 month schedules is .. ambitious. Hopeful. We could use other words as well. The real product they have is automation. They figured out a way to compile a large model into a circuit. That's, in itself, pretty impressive. If they can do this, they can also compile models to an HDL and deploy them to large FPGA simulators for quick validation. If we see models maturing at a "good enough" state, even a longer turnaround between model release and silicon makes sense. While I also see lots of these systems running standalone, I think they'll really shine combined with more flexible inference engines, running the unchanging parts of the model while the coupled inference engine deals with whatever is too new to have been baked into silicon. I'm concerned with the environmental impact. Chip manufacture is not very clean and these chips will need to be swapped out and replaced at a cadence higher than we currently do with GPUs.
reply The design IP at 6nm is still tough; I feel like this team must have at least one real genius and some incredibly good support at tsmc. Or they've been waiting a year for a slot :) reply "Ljubisa Bajic designed video encoders for Teralogic and Oak Technology before moving over to AMD and rising through the engineering ranks to be the architect and senior manager of the company's hybrid CPU-GPU chip designs for PCs and servers. Bajic did a one-year stint at Nvidia as a senior architect, bounced back to AMD as a director of integrated circuit design for two years, and then started Tenstorrent." His wife (COO) worked at Altera, ATI, AMD and Tenstorrent. "Drago Ignjatovic, who was a senior design engineer working on AMD APUs and GPUs and took over for Ljubisa Bajic as director of ASIC design when the latter left to start Tenstorrent. Nine months later, Ignjatovic joined Tenstorrent as its vice president of hardware engineering, and he started Taalas with the Bajices as the startup's chief technology officer." Not a youngster gang... reply There's already some good work on router benchmarking which is pretty interesting reply Abundance supports different strategies. One approach: Set a deadline for a response, send the turn to every AI that could possibly answer, and when the deadline arrives, cancel any request that hasn't yet completed. You know a priori which models have the highest quality in aggregate. Pick that one. reply I'm out of the loop on training LLMs, but to me it's just pure data input. Are they choosing to include more code rather than, say, fiction books? reply From there you go to RL training, where humans are grading model responses, or the AI is writing code to try to pass tests and learning how to get the tests to pass, etc. The RL phase is pretty important because it's not passive, and it can focus on the weaker areas of the model too, so you can actually train on a larger dataset than the sum of recorded human knowledge.
reply I desperately want there to be differentiation. Reality has shown over and over again it doesn't matter. Even if you do the same query across X models and then some form of consensus, the improvements on benchmarks are marginal and UX is worse (more time, more expensive, final answer is muddied and bound by the quality of the best model) reply https://agent-relay.com/ reply Can we use older (previous generation, smaller) models as a speculative decoder for the current model? I don't know whether the randomness in training (weight init, data ordering, etc) will affect this kind of use. To the extent that these models are learning the "true underlying token distribution" this should be possible, in principle. If that's the case, speculative decoding is an elegant vector to introduce this kind of tech, and the turnaround time is even less of a problem. reply Families of model sizes work great for speculative decoding. Use the 1B with the 32B or whatever. It's a balance as you want it to be guessing correctly as much as possible but also be as fast as possible. Validation takes time and every guess needs to be validated etc. The model you're using to speculate could be anything, but if it's not guessing what the main model would predict, it's useless. reply Afaik it can work with anything, but sharing vocab solves a lot of headaches and the better token probs match, the more efficient it gets. Which is why it is usually done with same family models and most often NOT just different quantizations of the same model. reply > to get the first N tokens sorted, only when the big model and small model diverge do you infer on the big model suggests there is something I'm unaware of. If you compare the small and big model, don't you have to wait for the big model anyway and then what's the point? I assume I'm missing some detail here, but what? reply More info: * https://research.google/blog/looking-back-at-speculative-dec... * https://pytorch.org/blog/hitchhikers-guide-speculative-decod... reply https://research.google/blog/speculative-cascades-a-hybrid-a...
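The question in the sub-thread above ("don't you have to wait for the big model anyway?") comes down to batching: the big model does not generate each token one at a time; it scores a whole block of drafted tokens in a single parallel pass and keeps the longest matching prefix. Here is a minimal sketch of the greedy accept/reject loop; draft_next and target_greedy are toy stand-ins, not a real model API, and real implementations compare probability distributions as described in the posts linked above.

# Minimal sketch of (greedy) speculative decoding. The big model checks all
# drafted positions in one (conceptually batched) pass, so it is consulted
# once per block rather than once per token.

def draft_next(tokens):            # fast small model: propose one token (toy)
    return (sum(tokens) + 1) % 50

def target_greedy(tokens):         # slow big model: greedy next token (toy)
    return (sum(tokens) + 1) % 50  # identical here, so drafts always match

def speculative_step(tokens, k=4):
    # 1) the cheap model drafts k tokens autoregressively
    draft = []
    for _ in range(k):
        draft.append(draft_next(tokens + draft))
    # 2) the expensive model verifies all k positions; on real hardware this
    #    is a single forward pass over the whole drafted block
    accepted = []
    for i, t in enumerate(draft):
        if target_greedy(tokens + draft[:i]) == t:
            accepted.append(t)                          # draft agreed, keep it
        else:
            accepted.append(target_greedy(tokens + draft[:i]))  # take big model's token
            break                                       # stop at first divergence
    return tokens + accepted

seq = [1, 2, 3]
for _ in range(3):
    seq = speculative_step(seq)
print(seq)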
reply I'm really curious if context will really matter if using methods like Recursive Language Models. That method is suited to break down a huge amount of context into smaller subagents recursively, each working on a symbolic subset of the prompt. The challenge with RLM seemed like it burned through a ton of tokens to trade for more accuracy. If tokens are cheap, RLM seems like it could be beneficial here to provide much more accuracy over large contexts despite what the underlying model can handle. 0. https://arxiv.org/abs/2512.24601 reply https://arxiv.org/abs/2511.06174 https://arxiv.org/abs/2401.03868 For a real world use case, you would need an FPGA with terabytes of RAM. Perhaps it'll be off-chip HBM. But for large models, even that won't be enough. Then you would need to figure out NV-link like interconnect for these FPGAs. And we are back to square one. reply That's a lot of surface, isn't it? As big as an M1 Ultra (2x M1 Max at 432mm² on TSMC N5P), a bit bigger than an A100 (820mm² on TSMC N7) or H100 (814mm² on TSMC N5). > The larger the die size, the lower the yield. I wonder if that applies? What's the big deal if a few parameters have a few bit flips? reply We get into the sci-fi territory where a machine achieves sentience because it has all the right manufacturing defects. Reminds me of this https://en.wikipedia.org/wiki/A_Logic_Named_Joe reply > There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. reply Intelligence is not as cool as you think it is. reply Yes, and that's exactly what they do. No, none of the problems you gave to the LLM while toying around with them are in any way novel. reply Do you not consider that novel problem solving? reply https://mathstodon.xyz/@tao/115855840223258103 https://huggingface.co/blog/dlouapre/gpt-single-minus-gluons https://deepmind.google/blog/alphaevolve-a-gemini-powered-co... reply I think you are confused about LLMs - they take in context, and that context makes them generate new things, for existing things we have cp. By your logic pianos can't be creative instruments because they just produce the same 88 notes.
reply But I think this specific claim is clearly wrong, if taken at face value: > They just regurgitate text compressed in their memory They're clearly capable of producing novel utterances, so they can't just be doing that. (Unless we're dealing with a very loose definition of "regurgitate", in which case it's probably best to use a different word if we want to understand each other.) reply You could imagine that it is possible to learn certain algorithms / heuristics that "intelligence" is comprised of. No matter what you output. Training for optimal compression of tasks / taking actions -> could lead to intelligence being the best solution. This is far from a formal argument but so is the stubborn reiteration of "it's just probabilities" or "it's just compression". Because this "just" thing is getting more and more capable of solving tasks that are surely not in the training data exactly like this. reply And it's a 3bit quant. So 3GB ram requirement. If they run 8B using native 16bit quant, it will use 60 H100 sized chips. reply Are you sure about that? If true it would definitely make it look a lot less interesting. reply I assume they need all 10 chips for their 8B q3 model. Otherwise, they would have said so or they would have put a more impressive model as the demo. https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod... reply 1. It doesn't make sense in terms of architecture. It's one chip. You can't split one model over 10 identical hardwired chips 2. It doesn't add up with their claims of better power efficiency. 2.4kW for one model would be really bad. reply First, it is likely one chip for llama 8B q3 with 1k context size. This could fit into around 3GB of SRAM which is about the theoretical maximum for TSMC N6 reticle limit. Second, their plan is to etch larger models across multiple connected chips. It's physically impossible to run bigger models otherwise since 3GB SRAM is about the max you can have on an 850mm2 chip. followed by a frontier-class large language model running inference across a collection of HC cards by year-end under its HC2 architecture https://mlq.ai/news/taalas-secures-169m-funding-to-develop-a...
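To make the quantization arithmetic in the exchange above concrete, here is a quick sanity check; the roughly 3 GB-per-die SRAM ceiling quoted in the thread is the commenter's estimate, not a published specification.

# Rough check of the "8B at 3 bits is about 3 GB" arithmetic from the thread;
# the ~3 GB per-die SRAM budget is a commenter's estimate, not a spec.
params = 8e9
for bits in (3, 8, 16):
    gigabytes = params * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{gigabytes:.0f} GB")
# 3-bit: ~3 GB (fits the claimed per-die budget); 16-bit: ~16 GB (does not).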
reply > We have got this scheme for the mask ROM recall fabric – the hard-wired part – where we can store four bits away and do the multiply related to it – everything – with a single transistor. So the density is basically insane. I'm not a hardware guy but they seem to be making a strong distinction between the techniques they're using for the weights vs KV cache > In the current generation, our density is 8 billion parameters on the hard wired part of the chip, plus the SRAM to allow us to do KV caches, adaptations like fine tuning, and etc. reply Not sure who started that "split into 10 chips" claim, it's just dumb. This is Llama 3B hardcoded (literally) on one chip. That's what the startup is about, they emphasize this multiple times. reply I was indeed wrong about 10 chips. I thought they would use llama 8B 16bit and a few thousand context size. It turns out, they used llama 8B 3bit with around 1k context size. That made me assume they must have chained multiple chips together since the max SRAM on TSMC n6 for reticle sized chip is only around 3GB. reply The focus here should be on the custom hardware they are producing and its performance, that is what's impressive. Imagine putting GLM-5 on this, that'd be insane. This reminds me a lot of when I tried the Mercury coder model by Inceptionlabs, they are creating something called a dLLM which is like a diffusion based llm. The speed is still impressive when playing around with it sometimes. But this, this is something else, it's almost unbelievable. As soon as I hit the enter key, the response appears, it feels instant. I am also curious about Taalas pricing. > Taalas' silicon Llama achieves 17K tokens/sec per user, nearly 10X faster than the current state of the art, while costing 20X less to build, and consuming 10X less power. Do we have an idea of how much a unit / inference / api will cost? Also, considering how fast people switch models to keep up with the pace. Is there really a potential market for hardware designed for one model only? What will they do when they want to upgrade to a better version? Throw the current hardware away and buy another one? Shouldn't there be a more flexible way? Maybe only having to switch the chip on top like how people upgrade CPUs. I don't know, just thinking out loudly.
https://www.nextplatform.com/wp-content/uploads/2026/02/taal...
Probably they don't know what the market will bear and want to do some exploratory pricing, hence the "contact us" API access form. That's fair enough.
But they're claiming orders of magnitude cost reduction.
> Is there really a potential market for hardware designed for one model only?
I'm sure there is. Models are largely interchangeable, especially at the low end. There are lots of use cases where you don't need super smart models but cheapness and fastness can matter a lot. Think about a simple use case: a company has a list of one million customer names but no information about gender or age. They'd like to get a rough understanding of this. Mapping name -> guessed gender, rough guess of age is a simple problem for even dumb LLMs. I just tried it on ChatJimmy and it worked fine. For this kind of exploratory data problem you really benefit from mass parallelism, low cost and low latency.
> Shouldn't there be a more flexible way?
The whole point of their design is to sacrifice flexibility for speed, although they claim they support fine tunes via LoRAs. LLMs are already supremely flexible, so it probably doesn't matter.
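For the curious, here is a minimal sketch of how that name-mapping job might be fanned out to a fast, cheap model. The endpoint URL, model id, and response shape are hypothetical placeholders (an OpenAI-style chat completion is assumed), not Taalas' actual API:

```python
import json
from concurrent.futures import ThreadPoolExecutor

import requests

API_URL = "https://example.invalid/v1/chat/completions"  # hypothetical endpoint, not a real service

def guess_demographics(name: str) -> dict:
    """Ask a small, fast model for a rough gender guess and age range for one name."""
    prompt = (
        "Guess the likely gender and a rough age range for this first name. "
        'Reply with JSON only, e.g. {"gender": "female", "age_range": "30-50"}.\n'
        f"Name: {name}"
    )
    resp = requests.post(
        API_URL,
        json={"model": "small-fast-model",  # placeholder model id
              "messages": [{"role": "user", "content": prompt}]},
        timeout=10,
    )
    text = resp.json()["choices"][0]["message"]["content"]
    return {"name": name, **json.loads(text)}

names = ["Maria", "John", "Yuki", "Fatima"]  # imagine a million of these
# Low per-request latency is what makes this kind of brute-force fan-out practical.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(guess_demographics, names))
print(results)
```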
This is using hardwired weights, with on-die SRAM used for K/V, for example. It's WAY more power efficient and faster; the tradeoff is that it's hardwired. Still, most frontier models are "good enough", so an obscenely fast version would be a major seller.

https://chatjimmy.ai/

The answer wasn't dumb like others are getting. It was pretty comprehensive and useful: "While the idea of a feline submarine is adorable, please be aware that building a real submarine requires significant expertise, specialized equipment, and resources."

Generate lots of solutions and mix and match. This allows a new way to look at LLMs.

It's not even close. It takes the eye 100 ms to 400 ms to blink. This thing takes under 30 ms to process a small query - about 10 times faster.

What's the moat with these giant data centers that are being built with hundreds of billions of dollars on Nvidia chips? If such chips can be built so easily, and offer this insane level of performance at 10x efficiency, then one thing is 100% sure: more such startups are coming... and with that, an entire new ecosystem.

(And people nowadays: "Who's Cisco?")

I need some smarts to route my question to the correct model. I won't care which that is. Selling commodities is notorious for slow and steady growth.

Me: "How many r's in strawberry?" Jimmy: There are 2 r's in "strawberry".
Generated in 0.001s • 17,825 tok/s

The question is not about how fast it is. The real questions are:
1. How is this worth it over diffusion LLMs? (No mention of diffusion LLMs at all in this thread. This also assumes that diffusion LLMs will get faster.)
2. Will Taalas also work with reasoning models, especially those beyond 100B parameters, and with the output being correct?
3. How long will it take for newer models to be turned into silicon? (This industry moves faster than Taalas.)
4. How does this work when one needs to fine-tune the model, but still benefit from the speed advantages?

I don't get these posts about ChatJimmy's intelligence. It's a heavily quantized Llama 3, using a custom quantization scheme because that was state of the art when they started. They claim they can update quickly (so I wonder why they didn't wait a few more months tbh and fab a newer model). Llama 3 wasn't very smart, but so what; a lot of LLM use cases don't need smart, they need fast and cheap. Also, apparently they can run DeepSeek R1, and they have benchmarks for that. New models only require a couple of new masks, so they're flexible.

Jimmy replied with, "2022 and 2023 openings:" 0_0

I can produce total gibberish even faster; it doesn't mean I produce Einstein-level thought if I slow down.

It isn't about model capability - it's about inference hardware. Same smarts, faster.

10b daily tokens, growing at an average of 22% every week. There are plenty of times I look to Groq for narrow-domain responses - these smaller models are fantastic for that and there's often no need for something heavier. Getting the latency of responses down means you can use LLM-assisted processing in a standard webpage load, not just for async processes. I'm really impressed by this, especially if this is its first showing.

For example, searching a database of tens of millions of text files. Very little "intelligence" is required, but cost and speed are very important.
If you want to know something specific on Wikipedia but don't want to figure out which article to search for, you can just have an LLM read the entire English Wikipedia (7,140,211 articles) and compile a report. Doing that would be prohibitively expensive and glacially slow with standard LLM providers, but Taalas could probably do it in a few minutes or even seconds, and it would probably be pretty cheap.

So many problems simply don't require a full LLM, but do require more than traditional software. Training a novel model isn't really a compelling argument at most tech startups right now, so you need to find an LLM-native way to do things.

LLMs have opened up a natural-language interface to machines. This chip makes it realtime. And that opens up a lot of use cases.

Jokes aside, it's very promising. For sure a lucrative market down the line, but definitely not for a model of size 8B. I think the parameter count needed for even lower-level intellect is around 80B (but what do I know). Best of luck!

Snarky, but true. It is truly astounding, and feels categorically different. But it's also perfectly useless at the moment. A digital fidget spinner.

do you have the foresight of a nematode?

You don't actually need "frontier models" for Real Work (c). (Summarization, classification and the rest of the usual NLP suspects.)

Like, give me semantic search that can detect the difference between SSL and TLS without needing to put a full LLM in the loop.

If we are going for accuracy, the question should be asked multiple times on multiple models to see if there is agreement. But I do think once you hit 80B, you can struggle to see the difference from SOTA. That said, GPT-4.5 was the GOAT. I can't imagine how expensive that one was to run.

This requires 10 chips for an 8-billion-parameter q3 model. 2.4 kW. 10 reticle-sized chips on TSMC N6. Basically 10x Nvidia H100 GPUs. The model is etched onto the silicon chip, so you can't change anything about the model after the chip has been designed and manufactured. Interesting design for niche applications. What is a task that is extremely high value, requires only a small model's intelligence, requires tremendous speed, is OK to run in a cloud due to power requirements, AND will be used for years without change, since the model is etched into silicon?
> Model is etched onto the silicon chip. So can't change anything about the model after the chip has been designed and manufactured.
Subtle detail here: the fastest turnaround one could reasonably expect on that process is about six months. This might eventually be useful, but at the moment the model churn is huge and people insist you use this week's model for best results.

> The first generation HC1 chip is implemented in the 6 nanometer N6 process from TSMC. Each HC1 chip has 53 billion transistors on the package, most of it very likely for ROM and SRAM memory. The HC1 card burns about 200 watts, says Bajic, and a two-socket X86 server with ten HC1 cards in it runs 2,500 watts.
https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...

This isn't ready for phones yet, but think of something like phones, where people buy new ones every 3 years; even having a mediocre on-device model at that speed would be incredible for something like Siri.

Video game NPCs?

Now, when reading, the eye moves in little jumps - saccades. They last tens of ms, the eye is blind during them, and with high-quality tracking, you know quite early just where that foveal peephole is going to land. So handwave a budget of a few ms for trajectory analysis, a few for 200 Hz rendering latency, and you still have 10-ish ms to play with. At 20k tok/s, that's 200 tokens. So perhaps one might JIT the next sentence, or the topic of the next paragraph, or the entire nature of the document, based on the user's attention. Imagine a universal document - you start reading, and you find the document is about whatever you wanted it to be about.

1. Assume it's running a better model, even a dedicated coding model. High scoring, but obviously not Opus 4.5.
2. Instead of the standard send-receive paradigm, we set up a pipeline of agents, each of whom parses the output of the previous. At 17k tok/s running locally, you could effectively spin up tasks like "you are an agent who adds semicolons to the end of the line in javascript"; with some sort of dedicated software in the style of Claude Code you could load an array of 20 agents, each with a role to play in improving outputs: take user input and gather context from the codebase -> rewrite what you think the human asked you in the form of an LLM-optimized instructional prompt -> examine the prompt for uncertainties and gaps in your understanding or ability to execute -> <assume more steps as relevant> -> execute the work. Could you effectively set up something that is configurable to the individual developer - a folder of system prompts that every request loops through? Do you really need the best model if you can pass your responses through a medium-tier model that engages in rapid self-improvement 30 times in a row before your Claude server has returned its first-shot response?

So in my opinion, in a scenario like this where the token output is near instant but you're running a lower-tier model, good tooling can overcome the differences from a frontier cloud model.
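A minimal sketch of that "folder of system prompts" idea, assuming nothing beyond some chat-completion call you can make cheaply; the complete() stub and the prompts are placeholders, not anything from the thread's actual tooling:

```python
# Sketch of a fixed pipeline of "agents": each stage is a system prompt applied to the
# previous stage's output. complete() is a stand-in for any fast chat-completion call.
def complete(system_prompt: str, text: str) -> str:
    # Placeholder: wire this to your inference endpoint of choice.
    return text

PIPELINE = [
    "Gather the relevant context from the codebase for this request.",
    "Rewrite the request as a precise, LLM-optimized instruction.",
    "List uncertainties or gaps in the instruction and resolve the ones you can.",
    "Execute the work described by the instruction and return only the result.",
]

def run_pipeline(user_request: str) -> str:
    text = user_request
    for system_prompt in PIPELINE:
        # At ~17k tok/s each hop costs milliseconds, so even a deep chain stays interactive.
        text = complete(system_prompt, text)
    return text

print(run_pipeline("add semicolons to the end of every line in utils.js"))
```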
Basically, logistically it's going to need to be in a data centre. It's ideal for small-context, high-throughput work. Perhaps parsing huge text piles, like if you had the entire Epstein files as text. I think Claude Code benefits from larger context to keep your entire project in view, and from deep reasoning. What this would certainly replace is when Claude dispatches to Haiku for manual NLP tasks.

I wonder how you cool a 3x3 cm die that outputs 2.5 kW of heat. In the article they mention that the traditional setup requires water cooling, but surely this does as well, right?

I'll take one with a frontier model please, for my local coding and home AI needs.

The slow word-by-word typing was what we started to get used to with LLMs. If these techniques get widespread, we may grow accustomed to the "old" speed again, where content loads ~instantly. Imagine a content forest like Wikipedia instantly generated like a Minecraft world...

A chatbot which tells you various fun facts is not the only use case for LLMs. They're language models first and foremost, so they're good at language processing tasks (where they don't "hallucinate" as much). Their ability to memorize various facts (with some "hallucinations") is an interesting side effect which is now abused to make them into "AI agents" and whatnot, but they're just general-purpose language processing machines at their core.

Alternatively, ask yourself how plausible it sounds that all the facts in the world could be compressed into 8B parameters while remaining intact and fine-grained. If your answer is that it sounds pretty impossible... well, it is.

The size of the model doesn't factor in, as tiny models can use the Internet to fetch factual information. But you think they are accurate repositories of knowledge, even though that's physically impossible unless lossless infinite compression algorithms exist (they don't, can't and won't).

Smaller models, not so much.

What GP expected to happen already happened around late 2024 to early 2025, when LLM frontends got web search. It's old tech now.
What type of latency-sensitive applications are appropriate for a small-model, high-throughput solution like this? I presume this type of specialization is necessary for robotics, drones, or industrial automation. What else?

"""We're going to see a further bifurcation in inference use-cases in the next 12 months. I'm expecting this distinction to become prominent:
(A) Massively parallel (optimize for tokens/$)
(B) Serial low latency (optimize for tokens/s)
Users will switch between A and B depending on need.
Examples of (A):
- "Use subagents to search this 1M line codebase for DRY violations subject to $spec."
Examples of (B):
- "Diagnose this one specific bug."
- "Apply these text edits."
(B) is used in funnels to unblock (A)."""

1. Intent-based API gateways: convert natural language queries into structured API calls in real time (e.g., "cancel my last order and refund it to the original payment method" -> authentication, order lookup, cancellation, refund API chain). A rough sketch of this one follows below.
2. Of course, realtime voice chat... kinda like you see in movies.
3. Security and fraud triage systems: parse logs without hardcoded regexes, issue alerts and full user reports in real time, and decide which automated workflows to trigger.
4. Highly interactive what-if scenarios powered by natural language queries.
This effectively gives you database-level speeds on top of natural language understanding.
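A rough sketch of item 1 above, the intent-gateway idea: have the model emit a strict JSON intent, then dispatch to ordinary deterministic API calls. The intent names, fields, and complete() stub are invented for illustration only:

```python
import json

def complete(system_prompt: str, user_message: str) -> str:
    # Placeholder for a call to a fast, cheap model; a real gateway would call its API here.
    return '{"intent": "refund_order", "order_id": "last", "refund_to": "original_payment_method"}'

SYSTEM = (
    "Classify the user's request into one of: refund_order, track_order, cancel_order, other. "
    'Reply with JSON only: {"intent": "...", "order_id": "...", "refund_to": "..."}'
)

def handle(user_message: str) -> None:
    call = json.loads(complete(SYSTEM, user_message))
    if call["intent"] == "refund_order":
        # lookup_order(), cancel_order(), issue_refund() would be plain deterministic API calls.
        print("refund", call["order_id"], "to", call["refund_to"])
    else:
        print("escalate to a human or a bigger model")

handle("cancel my last order and refund it to the original payment method")
```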
> me: the moon
> Jimmy: The answer to "What is the capital of France?" I was looking for was the city of Paris, but that's not the correct response to the original question of the capital of France. The question that got cut off was actually "What is the capital of France?", and the response "There are plenty of times I look to groq for narrow domain responses" wasn't the answer I was looking for.
It is certainly fast, but I think there might be some caching issues somewhere.

The quantization looks pretty severe, which could make the comparison chart misleading. But I tried a trick question suggested by Claude and got nearly identical results in regular ollama and with the chatbot. And quantization to 3 or 4 bits still would not get you that HOLY CRAP WTF speed on other hardware! This is a very impressive proof of concept. If they can deliver that medium-sized model they're talking about... if they can mass-produce these... I notice you can't order one, so far.

Additionally, LLMs have been tested and found valuable in benchmarks, but not used in a large number of domains due to speed and cost limitations. These spaces will eat up these chips very quickly.

It could give a boost to the industry of electron microscopy analysis, as the frontier model creators could be interested in extracting the weights of their competitors. The high speed of model evolution has interesting consequences for how often batches and masks are cycled. Probably we'll see some pressure on chip manufacturers to create masks more quickly, which can lead to faster hardware cycles.
Probably with some compromises, i.e. all of the util stuff around the chip would be static, and only the weights part would change. They might in fact pre-make masks that only have the weights missing, for even faster iteration speed.

Each chip is the size of an H100. So 80 H100s to run at this speed. Can't change the model after you manufacture the chips, since it's etched into silicon.

10 H100-sized chips for a 3 GB model. I think it's a niche of a niche at this point. I'm not sure what optimization they can do, since a transistor is a transistor.

As an example, we've been experimenting with letting users search free-form text, and using LLMs to turn that into a structured search fitting our setup. The latency of the response from any existing model simply kills this; it's too high to be used for something where users are at most used to the delay of a network request plus very little. There are plenty of other use cases like this.

The sheer speed at which this thing can "think" is insane.

"447 / 6144 tokens" "Generated in 0.026s • 15,718 tok/s" This is crazy fast. I always figured this kind of speed was ~2 years in the future, but it's here, now.

There are use cases for fast/ultrafast inference models - classifying text, scoring things, extracting information - but for coding and other knowledge tasks, you're not going to get to your solution faster at 16,000 tokens/s if the solution never comes (or is the wrong one).

> Write me 10 sentences about your favorite Subway sandwich
Click button. Instant! It was so fast I started laughing. This kind of speed will really, really change things.

If you etch the bits into silicon, you then have to accommodate the bits by physical area, which is the transistor density for whatever modern process they use. This gives you a lower bound for the size of the wafers. This can give huge wafers for a very set model which is old by the time it is finalized. Etching generic functions used in ML and common fused kernels would seem much more viable, as they could be used as building blocks.
If power costs are significantly lower, they can pay for themselves by the time they are outdated. It also means you can run more instances of a model in one datacenter, and that seems to be a big challenge these days: simply building enough data centres and getting power to them. (See the ridiculous plans for building data centres in space.) A huge part of the cost of making chips is the masks. The transistor masks are expensive; metal masks less so. I figure they will eventually freeze the transistor layer and use metal masks to reconfigure the chips when new models come out. That should further lower costs. I don't really know if this makes sense. It depends on whether we get new breakthroughs in LLM architecture or not. It's a gamble, essentially. But honestly, so is buying Nvidia Blackwell chips for inference. I could see them getting uneconomical very quickly if any of the alternative inference-optimised hardware pans out.

I really don't like the hallucination rate for most models, but it is improving, so that is still far in the future. What I could see, though, is the whole unit they made being power-efficient enough to run on a robotics platform for human-computer interaction. It makes sense that they would try to make their tech as repurposable as they can, since making changes is fraught with a long time frame and risk. But if we look long term and pretend that they get it to work, they just need to stay afloat until better smaller models can be made with their technology, so it becomes a waiting game for investors and a risk assessment.
^^^ I think the opposite is true. Anthropic and OpenAI are releasing new versions every 60-90 days now, it seems, and you could argue they're going to start releasing even faster.

Per period of time, I'd say yes.

So this is very cool, though I'm not sure how the economics work out. 2 months is a long time in the model space. Although for many tasks, the models are now "good enough", especially when you put them in a "keep trying until it works" loop and run them at high inference speed. Seems like a chip would only be good for a few months though; they'd have to be upgrading them on a regular basis. Unless model growth plateaus, or we exceed "good enough" for the relevant tasks, or both. The latter part seems quite likely, at least for certain types of work. On that note, I've shifted my focus from "best model" to "fastest/cheapest model that can do the job". For example, testing Gemini Flash against Gemini Pro for simple tasks, they both complete the task fine, but Flash does it 3x cheaper and 3x faster. (Also had good results with Grok Fast in that category of bite-sized "realtime" workflows.)
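In that spirit, a tiny sketch of "cheapest model that can do the job" routing: try the fast model first and only escalate when a cheap check fails. The model names and the ask() stub are placeholders, not any particular provider's API:

```python
def ask(model: str, prompt: str) -> str:
    # Placeholder for a real API call; swap in your provider's client here.
    return f"({model}) answer to: {prompt}"

def passes_sanity_check(draft: str) -> bool:
    # Stand-in for a real validation step: schema check, unit test, regex, etc.
    return bool(draft.strip())

def answer(prompt: str, looks_hard: bool = False) -> str:
    if not looks_hard:
        draft = ask("fast-small-model", prompt)   # cheap, low-latency first attempt
        if passes_sanity_check(draft):
            return draft
    return ask("big-slow-model", prompt)          # pay for the big model only when needed

print(answer("Classify this support ticket as billing, bug, or feature request."))
```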
Insane gains; makes me excited for the future. Imagine Opus-like responses in <1 second. I suspect the power efficiency will be nearly entirely offset by increased usage, but it's more bang per watt.

Anyway, I imagine these are incredibly expensive, but if they ever sell them with Linux drivers, slotting into a standard PCIe slot, it would be absolutely sick. At 3 kW that seems unlikely, but for that kind of speed I bet I could find space in my cabinet and just rip it. I just can't justify $300k, you know.

The background on your site genuinely made me wonder what was wrong with my monitor.

Which brings me to my second thing. We mostly pitch the AI wars as OpenAI vs Meta vs Claude vs Google vs etc. But another take is the war between open, locally run models and SaaS models, which really is about the war for general computing. Maybe a business model like this is a great tool to help keep general computing in the fight.

They'll also be severely limited on context length, as it needs to sit in SRAM. Looks like the current one tops out at 6144 tokens, which I presume is a whole chip's worth. You'd also have to dedicate a chip to a whole user, as there's likely only enough SRAM for one user's worth of context. I wonder how much time it takes them to swap users in/out? I wouldn't be surprised if this chip is severely underutilized (you can't use it all when running decode, as you have to run token by token with one user, and then there's idle time as you swap users in/out). Maybe a more realistic deployment would have chips for linear layers and chips for attention? You could batch users through the shared weight chips and then provision more or fewer attention chips as you want, which would be per user (or shared amongst a small group of 2-4 users).

I think this is how I'm going to get my dream of Opus 3.7 running locally, quickly and cheaply on my mid-tier MacBook in 2030. Amazing. Anthropic et al. will be able to make marginal revenue from licensing the weights of their frontier-minus-minus models to these folks.

I'm confident there are at least 1-2 OOMs of improvement to come here in terms of the (intelligence : wattage) ratio. I really thought we were going to need a couple of dramatic OOM-improvement changes to the model composition / software layer in order to get models of Opus 3.7's capability running on our laptops. This release tells me that eventual breakthrough won't even be strictly necessary, imo.
* Many top-quality TTS and STT models
* Image recognition, object tracking
* Speculative decoding, attached to a much bigger model (big/small architecture?)
* An agentic loop trying 20 different approaches / algorithms, and then picking the best one
* Edited to add! Put 50 such small models together to create a SOTA super-fast model

Also interesting implications for optimization-driven frameworks like DSPy. If you have an eval loop and a useful reward function, you can iterate to the best possible response every time and ignore the cost of each attempt.
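A minimal sketch of that eval-loop idea: when each attempt is nearly free, sample N candidates and keep whichever one the reward function likes best. generate() and score() are placeholders for a model call and a task-specific metric; nothing here is DSPy's actual API:

```python
def generate(prompt: str) -> str:
    # Placeholder for a high-temperature call to the fast model, returning one candidate.
    return f"candidate answer to: {prompt}"

def score(candidate: str) -> float:
    # Placeholder reward function: run tests, validate a schema, compare to a rubric, etc.
    return float(len(candidate))

def best_of_n(prompt: str, n: int = 20) -> str:
    # At ~17k tok/s, 20 attempts cost roughly the wall-clock time one attempt used to.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("Write a regex that matches ISO 8601 dates."))
```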
You: What is your knowledge cut-off? Jimmy: My knowledge cut-off is 2022, which means that my training data is current up to 2021, but I have been trained on a dataset that is updated periodically. If you have any specific questions about events or topics that occurred after 2021, I may not have information on those topics. However, I can still provide general information and context on those topics to help guide further research. The instantaneous response is impressive, though. I'm sure there will be applications for this, I just lack the imagination to know what they'll be.

Model intelligence is, in many ways, a function of model size. A small model tuned for a given domain is still crippled by being small. Some things don't benefit from general intelligence much. Sometimes a dumb narrow specialist really is all you need for your tasks. But building that small specialized model isn't easy or cheap. Engineering isn't free, models tend to grow obsolete as the price/capability frontier advances, and AI specialists are less of a commodity than AI inference is. I'm inclined to bet against approaches like this on principle.

This does not sound like it will simplify the training and data side, unless their or subsequent models can somehow be efficiently utilized for that. However, this development may lead to (open source) hardware and distributed system compilation, EDA tooling, bus system design, etc. getting more deserved attention and funding. In turn, new hardware may lead to more training and data competition instead of the current NVIDIA model-training monopoly market. So I think you're correct for ~5 years.

So they use 3-bit values. Is that current thinking? LLMs started at 32-bit floats, and have gradually shrunk. 8-bit floats seem to work. Is 3 bits pushing it?

Obviously not for any hard applications, but for significantly better autocorrect, local next-word predictions, file indexing (tagging, I suppose). The efficiency of such a small model should theoretically be great!

It's an homage to Jensen: "The display is the computer" https://www.wired.com/2002/07/nvidia/

> It seems like "six seven" is likely being used to represent the number 17. Is that correct? If so, I'd be happy to discuss the significance or meaning of the number 17 with you.

Test prompt:
```Please classify the sentiment of this post as "positive", "neutral" or "negative":
Given the price, I expected very little from this case, and I was 100% right.```
Jimmy: Neutral.
I tried various other examples that I had successfully "solved" with very early LLMs and the results were similarly bad.

Also, "10k tokens per second would be fantastic" might not be sufficient (even remotely) if you want to "process millions of log lines per minute". Assuming a single log line is just 100 tokens, you need (100 * 2 million / 60) ~ 3.3 million tokens per second of processing speed :)

I just wanted some toast, but here I am installing an app, dismissing 10 popups, and maybe now arguing with a chat bot about how I don't in fact want to turn on notifications.

Also, what if Cerebras decided to make a wafer-sized FPGA array and turned large language models into lots and lots of logic gates?

Show me something at a model size of 80GB+ or this feels like "positive results in mice".

This is great even if it can't ever run Opus. Many people will be extremely happy about something like Phi accessible at lightning speed.

What does that mean for 8B models 24 months from now?

Aside from the obvious concern that this is a tiny 8B model, I'm also a bit skeptical of the power draw.
2.4 kW feels a little bit high, but someone else should try doing the napkin math on the total throughput-to-power ratio compared with the H200 and other chips.

So what's the use case for an extremely fast small model? Structuring vast amounts of unstructured data, maybe? Put it in a little service droid so it doesn't need the cloud?

The idea is good though and could work.

It's a bad idea that can't work well. Not while the field is advancing the way it is. Manufacturing silicon is a long pipeline - and in the world of AI, a one-year capability gap isn't something you can afford. You build a SOTA model into your chips, and by the time you get those chips, it's outperformed at its tasks by open-weights models half its size. Now, if AI advances somehow ground to a screeching halt, with model upgrades coming out every 4 years, not every 4 months? Maybe it'll be viable. As is, it's a waste of silicon.

The prototype is: silicon with a Llama 3.1 8B etched into it. Today's 4B models already outperform it. A token rate in five digits is a major technical flex, but does anyone really need to run a very dumb model at this speed? The only things that come to mind that could reap a benefit are asymmetric exotics like VLA action policies and voice stages for V2V models. Both of which are "small fast low-latency model backed by a large smart model", and both depend on model-to-model comms, which this doesn't demonstrate. In a way, it's an I/O accelerator rather than an inference engine. At best.

Which was always the killer assumption, and this changes little.

If you look at any development in computing, ASICs are the next step. It seems almost inevitable. Yes, it will always trail behind the state of the art. But value will come quickly in a few generations.

Or is that the catch? Either way, I am sure there will be some niche uses for it.
reply reply reply reply One interesting thing is that it keeps randomly emitting "ประก" (meaning "Announcement") and chartInstance. This recalls the early days of GPT-2, when the light bulb went on that "hey, there's something groundbreaking here". reply reply
With the next generation of Cerebras chips expected to be 5–7× faster, peak throughput could reach the ~17,500 tokens-per-second range. For smaller models like this, that level of performance is entirely realistic. So no, a general-purpose accelerator will likely continue to outperform a fixed-function ASIC with a specific model etched into it. Moreover, we're only looking at results from a two-year-old, relatively small model. We still don't know how this architecture will scale with a large MoE model, especially given constraints like limited on-chip KV cache and more complex attention mechanisms. The real test isn't performance on a small benchmark model, it's how the system handles large-scale, production-grade workloads under architectural constraints. reply reply reply
Incredibly distracting. No way to turn it off (at least within what's provided, without using something like devtools). reply reply
New models come out, time to upgrade your AI card, etc. reply reply reply reply reply reply reply reply reply reply reply reply reply
I am building data extraction software on top of emails, attachments, and cloud/local files. I use reverse template generation, with only the variable translation done by LLMs (3). Small models are awesome for this (4). I just applied for API access. If privacy policies are a fit, I would love to enable this for the MVP launch. 1. https://github.com/brainless/dwata 2. https://youtu.be/Uhs6SK4rocU 3. https://github.com/brainless/dwata/tree/feature/reverse-temp... 4. https://github.com/brainless/dwata/tree/feature/reverse-temp... reply reply reply
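As a rough illustration of the "reverse template" idea in that comment (a generic sketch, not the dwata implementation): a fixed template captures the document structure, and a small local model is only asked to clean up the captured variables. The field names and the `normalize_with_llm` hook below are hypothetical.
```
# Generic reverse-template extraction sketch (illustrative; not dwata's code).
import re

INVOICE_TEMPLATE = re.compile(
    r"Invoice\s+(?P<number>\S+)\s+dated\s+(?P<date>.+?)\s+for\s+(?P<amount>.+)"
)

def normalize_with_llm(field: str, raw: str) -> str:
    # Placeholder: in practice this would prompt a small local model, e.g.
    # "Rewrite this {field} in a canonical format: {raw}". Here it just trims.
    return raw.strip()

def extract(line: str):
    m = INVOICE_TEMPLATE.search(line)
    return {k: normalize_with_llm(k, v) for k, v in m.groupdict().items()} if m else None

print(extract("Invoice INV-0042 dated March 3rd, 2025 for 129.00 EUR"))
```
The appeal of a very fast small model here is clear: the per-field LLM calls are tiny and latency-bound, so raw tokens per second matters more than reasoning ability.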
Everyone in Capital wants the perpetual rent-extraction model of API calls and subscription fees, which makes sense given how well it worked in the SaaS boom. However, as Taalas points out, new innovations often scale in consumption closer to the point of service rather than in monopolized centers, and I expect AI to be no different. When it's being used sparsely for odd prompts, or agentically to produce larger outputs, local (or near-local) inference is the inevitable end goal: if a model like Qwen or Llama can output something similar to Opus or Codex while running on an affordable accelerator at home or in the office server, then why bother with the subscription fees or API bills? That compounds when technical folks (hi!) point out that any process done agentically can instead just be output once as software for infinite repetition in lieu of subscriptions, maintained indefinitely by existing technical talent and the same accelerator you bought with CapEx, rather than a fleet of pricey AI seats on OpEx. The big push seems to be building processes dependent upon recurring revenue streams, but I'm gradually seeing more and more folks work the slop machines for the output they want and then put it away or cancel their sub. I think Taalas - conceptually, anyway - is on to something. reply
…for a privileged minority, yes, and to the detriment of billions of people whose names the history books conveniently forget. AI, like past technological revolutions, is a force multiplier for both productivity and exploitation. reply reply
Someone mentioned that maybe we'd see a future where these things come in something like Nintendo cartridges. Want a newer model? Pop in the right cartridge. reply reply
Sounds like people drinking the Kool-Aid now. I don't reject that AI has use cases. But I do reject that it is promoted as an "unprecedented amplifier" of human whatever. These folks would even claim that AI improves human creativity. Well, has this been the case? reply
I'm progressing with my side projects like never before. reply reply
Yes. Example: if you've never programmed in language X but want to build something in it, you can focus on getting from 0 to 1 instead of being bogged down in the idiosyncrasies of said language. reply
For many of us, the answer is an emphatic yes. reply reply reply reply
Paging qntm... reply reply reply reply
Whoever doesn't buy/replicate this in the next year is dead. Imagine OpenAI trying to sell you a platform that takes 15 minutes, when someone else can do it in 0.001 s. reply
Was the story any good? The model used isn't comparable to something OpenAI would offer you today. What good is a fast gibberish generator? reply
> Generate a fiction book about an old sorcerer who, like all elderly magicians, has to go back to studying magic for two years at a magic school in order to be granted the right to keep his wand and to keep doing magic at his home.
Answer, generated in 0.063 s @ 15,742 tok/s:
> The Relearning of Eryndor Thorne> In the quaint village of Brindlemark, where ancient traditions and mystical energies coexisted, Eryndor Thorne had lived a life of quiet reverence.
For nigh on five decades, he had wielded the arcane arts with precision and mastery, earning him the respect of his peers and the awe of the villagers. His home, a humble abode atop a hill overlooking the rolling hills and verdant valleys, was a sanctuary of sorts, a place where the very fabric of magic was woven into the air. His library was a treasure trove of ancient tomes, texts, and scrolls, filled with the wisdom of the ages. Yet, despite his mastery over the arcane, Eryndor felt the stirrings of a nagging concern. The once-renewable license to practice magic, a privilege granted by the Council of Mages, was on the cusp of expiration.> As all elderly mages knew, every two decades, the Council would demand that those who had spent a lifetime mastering the arcane arts return to the prestigious Wysteria Academy, the hallowed halls of magic learning, to relearn the foundational principles of magic. This was a ritual as old as the craft itself, a test of the elderly mage's understanding, to prove they still possessed the fundamental knowledge and skills required to practice magic with precision and safety.> Eryndor, like many of his fellow mages, had grown complacent in his mastery. He had mastered the intricacies of elemental magic, bending the winds, waters, and flames to his will. His wand, a family heirloom passed down through generations, had become an extension of his arm, channeling the power of magic with ease. Yet, despite his proficiency, the Council's decree was clear: Eryndor must return to Wysteria Academy for two years of reeducation.> The journey was not an easy one. Packing his worn leather satchel with the few books and scrolls he deemed essential, Eryndor bid farewell to his loved ones in Brindlemark, leaving behind the familiarity of his home to embark on a journey that would take him to a place both wondrous and treacherous. At Wysteria Academy, he joined a cohort of students from all corners of the realm, each as skilled and eager as he once was. The relearning process was a grueling one, as if facing a mountain he had once conquered. New techniques were presented, new theories to unravel, and new expectations to meet. Eryndor, once a master, now sat at the feet of his teachers, absorbing the new knowledge like a parched traveler at an oasis.> Among his fellow students, there was the young, fire-kissed mage, Elara, who wielded magic with an intensity that bordered on reckless abandon. Her fiery nature and quick wit often put her at odds with the strict, ancient traditions, earning her a certain notoriety among the academy's elder mages. Then there was the enigmatic, shadow-drawn Kael, whose mastery of the arcane was matched only by his mystery. Kael's affinity for the dark arts raised more than a few eyebrows among the faculty, but Eryndor, having once walked the fine line between light and shadow, saw something of himself in the young mage.> As the years passed, Eryndor grew to appreciate the challenges and opportunities that came with his return to the academy. He found himself grappling with the nuances of magic anew, rekindling memories of his early days as a novice. The relearning process was as much about rediscovering himself as it was about mastering the arcane. His studies were a journey of self-discovery, one that tested the mettle of his will and the depths of his understanding.> Upon completion of his studies, Eryndor stood before the Council once more, his wand in hand, his heart afire with the thrill of rediscovery. 
The Council's examination was not merely a test of his knowledge but a test of his character. Eryndor, like many of his peers, had grown complacent, but the rigors of relearning had rekindled a spark within him, a flame that would guide him through the trials ahead.> With his renewed license granted, Eryndor returned to Brindlemark, his home and his heart rejuvenated. His library, once a testament to his mastery, was now a canvas for the new knowledge he had acquired. His wand, now an extension of his rekindled passion for the arcane, channelled magic with a precision and flair that was unmatched. Eryndor Thorne, the elderly mage, had not only relearned magic but had rediscovered himself, a testament to the transformative power of education and the enduring spirit of the arcane.
reply reply I asked it some basic questions and it fudged them like it was ChatGPT 1.0. reply
To the authors: do not self-deprecate your work. It is true this is not a frontier model (anymore), but the tech you've built is truly impressive. Very few hardware startups have a v1 as good as this one! Also, for many tasks I can think of, you don't really need the best of the best of the best; cheap and instant inference is a major selling point in itself. reply
Anyway, VCs will dump money onto them, and we'll see if the approach can scale to bigger models soon. reply reply
> The number "six" is actually a noun, not a number. However, I assume you're asking to write the number 7 as a numeral, which is: 7 reply reply reply reply
I'm not sure how good Llama 3.1 8B is for that, but it should work, right? Autocomplete models don't have to be very big, but they gotta be fast. reply
1. Generic mask layers and a board to handle what's common across models, especially memory and interface. 2. Specific layers for the model implementation. Masks are the most expensive part of ASIC design. So, keeping the custom part small, with the rest pre-proven in silicon and even shared across companies, would drop the costs significantly. This is already done in the hardware industry in many ways, but not for model acceleration. Then, do 8B, 30-40B, 70B, and 405B models in hardware. Make sure they're RLHF-tuned well, since changes will be impossible or limited. Prompts will drive most useful functionality. Keep cranking out chips. There's maybe a chance to keep the weights changeable on-chip, but it should still be useful if only inputs can change. The other concept is to use analog neural networks, with the analog layers on older, cheaper nodes. We only have to customize that part per model. The rest is pre-built digital with standard interfaces on a modern node. Given the chips would be distributed, one might get away with 28nm for the shared part and develop it with shuttle runs.
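The cost intuition behind that mask-sharing idea can be made concrete with a toy amortization; every dollar figure and volume below is a placeholder assumption, not a quoted foundry price.
```
# Toy amortization of mask costs for the shared-base-die idea above.
# All figures are illustrative assumptions.
full_mask_set   = 15_000_000   # assumed: full mask set at an advanced node
custom_layers   =  2_000_000   # assumed: re-spinning only the model-specific layers
chips_per_model =     50_000   # assumed production volume per hardwired model

print(f"fully custom:    ${full_mask_set / chips_per_model:,.0f} of mask cost per chip")
print(f"shared base die: ${custom_layers / chips_per_model:,.0f} of mask cost per chip")
# With a pre-proven shared base, only the per-model layers are re-amortized each
# time a new model is etched, which is where the claimed cost reduction comes from.
```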
reply reply : https://artificialanalysis.ai/models/llama-3-1-instruct-8b/p... reply reply reply reply
> On the cost front, deploying modern models demands massive engineering and capital: room-sized supercomputers consuming hundreds of kilowatts…
This is just wrong. The largest models are probably 1-2 trillion parameters. Say 2 trillion, and let's pretend it's only quantized to 8-bit (even though it could easily be half that). So we need 2 TB of VRAM. Not even using the latest hardware, let's say H100 chips with 80 GB of VRAM each, with 8 of them in, say, an 8U chassis. (Although you can certainly fit these in 6U still air-cooled, or even 4U water-cooled.) Three of these servers would almost do, but let's call it four to include plenty of room for context.
The largest physical size would be 32U - most of a single rack - which is hardly the size of a room, even in Manhattan. Total power maybe 40 kW. And you could easily drop these numbers to a half or a quarter of that with reasonable modifications or upgrades. If you want to sell your hardware, start by being honest about the problem you're addressing. reply
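For what it's worth, the arithmetic in that comment holds up; a quick check using the commenter's own assumptions (2 trillion parameters at 8-bit, eight 80 GB GPUs per server):
```
# Re-running the comment's napkin math with its stated assumptions.
import math

params = 2e12                   # assumed frontier-model size (commenter's figure)
weight_gb = params * 1 / 1e9    # 8-bit weights -> 1 byte per parameter -> 2,000 GB
per_server_gb = 8 * 80          # eight 80 GB H100-class GPUs per server

servers = math.ceil(weight_gb / per_server_gb)
print(f"{weight_gb:,.0f} GB of weights -> {weight_gb / per_server_gb:.2f} servers of VRAM, "
      f"round up to {servers} (headroom goes to KV cache)")
# ~3.1 servers of raw weight storage, i.e. the "three would almost do, call it four"
# figure above, well short of a room-sized installation.
```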
The most impressive demo since GPT-3, honestly. Since we already have open source models that are plenty good, like the new Kimi K2.5, all I need is the ability to run them at moderate speed. Honestly, I am not bullish on capabilities that models do not yet have; it seems we have seen it all, and the only advancement has been context size. And honestly, I would claim this is the market sentiment as well: Anthropic showed Opus 4.6 first, but the big release was actually Sonnet, the model people would use routinely. Nobody gave a shit about Gemini 3.1 Pro; 3.0 Flash was very successful... Given all the recent developments in the last 12 months, no new use cases have opened up for me. Given this insane speed, even on a limited model/context size, we would approach AI very differently. reply
User You: What is the airspeed of an unladen swallow
Jimmy: A reference to the classic line from Monty Python and the Holy Grail!
Thus far it is correct, but here it comes:
In the movie, a group of medieval scholars, the Knights who say "Ni!" (also known as the Bridgekeepers), ask a king's servant, a blacksmith, if an unladen swallow makes good time. The blacksmith, played by John Cleese, responds with the now-famous line: "And what's the airspeed velocity of an unladen swallow?" To which the blacksmith responds: "What do you mean? An unladen swallow?" The blacksmith then explains that an unladen swallow is when a swallow (a type of bird) is flying in the air without carrying any burden, implying that it is moving at a speed that is not affected by carrying any weight or load.
Nope, that is not at all how it went. There were no blacksmiths, and it is not related to the Knights who say 'Ni!' - who have nothing scholarly about them; they wanted a shrubbery and demanded another one when their wish was fulfilled. It was the Bridgekeeper - not related to the aforementioned knights - who asked three questions and would throw anyone who could not answer all three into the chasm. When he asked Arthur - played by Graham Chapman - the question about the airspeed of the swallow, Arthur asked the Bridgekeeper whether he meant an African or European swallow. The Bridgekeeper did not have an answer and was thrown into the chasm, problem solved.
However, in reality, swallows are not typically "unladen" in flight. They are small birds that fly at relatively fast speeds, usually around 10-15 km/h (6-9 mph), but they are not usually carrying any burdens!
Needless LLM-blabber.
The "airspeed velocity of an unladen swallow" has become a meme and a cultural reference point, often used humorously or ironically to refer to situations where someone is trying to make an absurd or nonsensical argument or ask an absurd question.
Somewhat correct, but not necessary in this context. The correct answer to the question would have been Do you mean an African or European swallow? followed by a short reference to the movie. Of course this demo is not about the accuracy of the model - 'an old Llama', as mentioned elsewhere in this thread - but it does show that speed isn't everything. For generating LLM-slop this hardware implementation probably offers an unbeatable price/performance ratio, but it remains to be seen if it can be combined with larger and less hallucination-prone models.
For generating LLM-slop this hardware implementation probably offers an unbeatable price/performance ratio but it remains to be seen if it can be combined with larger and less hallucination-prone models. The correct answer to the question would have been Do you mean an African or European swallow? followed by a short reference to the movie.Of course this demo is not about the accuracy of the model - 'an old Llama' as mentioned elsewhere in this thread - but it does show that speed isn't everything. For generating LLM-slop this hardware implementation probably offers an unbeatable price/performance ratio but it remains to be seen if it can be combined with larger and less hallucination-prone models. Of course this demo is not about the accuracy of the model - 'an old Llama' as mentioned elsewhere in this thread - but it does show that speed isn't everything. For generating LLM-slop this hardware implementation probably offers an unbeatable price/performance ratio but it remains to be seen if it can be combined with larger and less hallucination-prone models. reply Congratulations! You figured out that this is a demo of a very small 8B model from 2022. reply reply reply reply An LLM's effective lifespan is a few months (ie the amount of time it is considered top-tier), it wouldn't make sense for a user to purchase something that would be superseded in a couple of months.An LLM hosting service however, where it would operate 24/7, would be able to make up for the investment. An LLM hosting service however, where it would operate 24/7, would be able to make up for the investment. reply I know it's not a resonating model, but I keep pushing it and eventually it gave me this as part of it's output888 + 88 + 88 + 8 + 8 = 1060, too high... 8888 + 8 = 10000, too high... 888 + 8 + 8 +ประก 8 = 1000,ประกI googled the strange symbol, it seems to mean Set in thai? 888 + 88 + 88 + 8 + 8 = 1060, too high... 8888 + 8 = 10000, too high... 888 + 8 + 8 +ประก 8 = 1000,ประกI googled the strange symbol, it seems to mean Set in thai? I googled the strange symbol, it seems to mean Set in thai? reply reply |
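For reference, the sums the model is attempting in that quoted output look like the classic puzzle of reaching 1000 by adding numbers written only with the digit 8 (the standard answer is 888 + 88 + 8 + 8 + 8 = 1000). A minimal brute-force sketch in Python, assuming that reading of the prompt (the prompt itself is not quoted in the thread):

from itertools import combinations_with_replacement

# Assumed puzzle: reach 1000 using only addition of the terms 8, 88 and 888.
terms = [8, 88, 888]
for n in range(1, 9):  # allow up to eight addends
    for combo in combinations_with_replacement(terms, n):
        if sum(combo) == 1000:
            print(" + ".join(map(str, combo)), "= 1000")  # prints: 8 + 8 + 8 + 88 + 888 = 1000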
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Facebook_Beacon] | [TOKENS: 1411] |
Contents Facebook Beacon Beacon formed part of Facebook's advertisement system that sent data from external websites to Facebook, for the purpose of allowing targeted advertisements and allowing users to share their activities with their friends. Beacon reported to Facebook on Facebook's members' activities on third-party sites that also participated with Beacon. These activities were published in users' News Feed. This occurred even when users were not connected to Facebook, and happened without the knowledge of the Facebook user. The service was controversial and became the target of a class-action lawsuit, resulting in it shutting down in September 2009. One of the main concerns was that Beacon did not give the user the option to block the information from being sent to Facebook. Beacon was launched on November 6, 2007, with 44 partner websites. Mark Zuckerberg, CEO of Facebook, characterized Beacon on the Facebook Blog in November 2011 as a "mistake." Although Beacon was unsuccessful, it did pave the way for Facebook Connect, which has become widely popular. Privacy concerns and litigation Beacon created considerable controversy soon after it was launched, due to privacy concerns. On November 20, 2007, civic action group MoveOn.org created a Facebook group and online petition demanding that Facebook not publish their activity from other websites without explicit permission from the user. In fewer than ten days, this group gained 50,000 members. After the class-action lawsuit, Lane v. Facebook, Inc., Beacon was changed to require that any actions transmitted to the website would have to be approved by the Facebook user before being published. On November 29, 2007, Stefan Berteau, a security researcher for Computer Associates, published a note on his tests of the Beacon system. He found that data was still being collected and sent to Facebook despite users' opt-outs and not being logged in to Facebook at the time. This revelation was in direct contradiction to the statements made by Chamath Palihapitiya, Facebook's vice president of marketing and operations, in an interview with The New York Times published the same day: Q. If I buy tickets on Fandango, and decline to publish the purchase to my friends on Facebook, does Facebook still receive the information about my purchase? A. "Absolutely not. One of the things we are still trying to do is dispel a lot of misinformation that is being propagated unnecessarily." On November 30, 2007, Louise Story of The New York Times blogged that not only had she received the impression that Beacon would be an explicit opt-in program, but that Coca-Cola had also had a similar impression, and as a result, had chosen to withdraw their participation in Beacon. On December 5, 2007, Facebook announced that it would allow people to opt-out of Beacon. Founder Mark Zuckerberg apologized for the controversy. This has been the philosophy behind our recent changes. Last week we changed Beacon to be an opt-in system, and today we're releasing a privacy control to turn off Beacon completely. You can find it here. If you select that you don't want to share some Beacon actions or if you turn off Beacon, then Facebook won't store those actions even when partners send them to Facebook. On September 21, 2009, Facebook announced that it would shut down the service. On October 23, 2009, a class action notice was sent to Facebook users who may have used Beacon. The proposed settlement would require Facebook to pay $9.5 million into a settlement fund. 
The named plaintiffs (approximately 20) would be awarded a total of $41,000, the remainder consisting of legal fees. Technology Facebook Beacon worked through the use of a 1x1 GIF web bug on the third-party site and Facebook cookies. Clearing Facebook cookies from the browser after explicitly logging off from Facebook prevented the third-party site from knowing a user's Facebook identity. Lawsuit and settlement As part of a class action settlement, Facebook terminated Beacon. Facebook was also required by a court order to notify its users of the settlement. Facebook set up a $6 million fund to establish an independent non-profit foundation that would identify and fund projects and initiatives promoting online privacy, safety, and security. Facebook also set up a website about the lawsuit. Under the contingency fee arrangement with the plaintiffs, the law firms that filed the case would get a fee, likely to be $3–$4 million, but the average Facebook user would receive no monetary award. Facebook notified its users about the court order. Facebook received intense criticism because of Beacon. The case ended with the permanent termination of the system and the establishment of a privacy foundation. Before Beacon was terminated, 19 people opposed to it had organized a class-action lawsuit. To settle the case, Facebook paid $9.5 million in total to resolve the privacy concerns of its users. It established a non-profit foundation called the Digital Trust Foundation with $6.5 million, aiming to "fund and sponsor programs designed to educate users, regulators and enterprises regarding critical issues relating to the protection of identity and personal information online". Around $3 million was distributed to the original plaintiffs and attorneys. One of the class-action organizers lodged an objection that reached the Supreme Court, arguing that class members received little money from the settlement because the funds were donated to a newly founded charity. The objector also argued that the settlement was unfair because Facebook still controlled the foundation, since a Facebook employee was in charge of it. In response to the challenge, Facebook explained that direct payment would have been a poor alternative to setting up a foundation: divided among a huge number of class members, each potential plaintiff would receive only a tiny amount of money, so funding a non-profit organization to educate people about privacy issues seemed a better way to serve the same interests. Significance Beacon, described by Mark Zuckerberg as "a recommendation from a trusted friend", raised ongoing concerns regarding user privacy on social media sites and outraged privacy advocates. Beacon hurt Facebook's reputation by violating its software engineering tenets and disrespecting the privacy rights of its users. Since the failed launch of Beacon, Facebook has been mired in controversy over privacy issues. The Beacon stories led many Internet users to believe that "Facebook and other profit-oriented social networking sites are large Internet-based surveillance machines." In general, Beacon was viewed as a mistake because it appeared to be too explicit about the intentions inscribed in its protocol. By learning from this unsuccessful experience, Facebook has been seeking other ways to monetize its user database through social advertising.
Unlike Beacon, the process of commercialization tends to happen in the back-end system, thus becoming invisible to the users. In this way, much of the resistance from the user population can be avoided. More specifically, some argue that Facebook Beacon paved the way for its subsequent service, Facebook Connect, with both adopting the idea of utilizing third-party data. See also References External links |
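To make the web-bug mechanism in the Technology section concrete, here is a minimal sketch of how a 1x1 GIF served from a central domain, combined with that domain's identity cookie, can report third-party page activity back to the operator. The endpoint, cookie name and query parameters are hypothetical illustrations of the general technique, not Beacon's actual implementation.

from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

# A minimal transparent 1x1 GIF, the kind of payload typically used as a tracking pixel.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

class TrackingPixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A partner page embeds e.g. <img src="https://tracker.example/pixel.gif?event=purchase">.
        # The browser attaches this domain's cookies to the request, so the query
        # string and Referer header become linked to the stored user identity.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        user_id = cookies["uid"].value if "uid" in cookies else "anonymous"
        print(f"user={user_id} referer={self.headers.get('Referer')} path={self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    # Clearing the cookie (as the article notes) breaks the link between page views
    # and the stored identity; the request then arrives as "anonymous".
    HTTPServer(("localhost", 8000), TrackingPixelHandler).serve_forever()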
======================================== |
[SOURCE: https://www.theverge.com/policy/880850/social-media-lawsuits-meta-facebook-instagram-tiktok] | [TOKENS: 3678] |
Policy | Updated Feb 20, 2026, 5:11 PM UTC
Social media on trial: tech giants face lawsuits over addiction, safety, and mental health
by Stevie Bonifield

A set of bellwether cases alleging that social media platforms harmed teens’ safety and mental health is going to trial this year, putting executives like Meta CEO Mark Zuckerberg on the stand to answer questions about what they’ve done or not done to protect kids. Unlike many earlier legal challenges against social media companies, these cases managed to overcome the companies’ attempts to get them dismissed based on objections citing Section 230, a law that protects online platforms from being held liable for their users’ speech. They accuse companies like Meta, Snap, TikTok, and Google-owned YouTube of designing their platforms in ways that, the plaintiffs claim, they knew could contribute to addiction, depression, and anxiety. Follow along below for all of the latest updates from the trials we’re currently following.

Feb 20 | Stevie Bonifield | Smart glasses in court are a privacy nightmare
When Meta CEO Mark Zuckerberg arrived at a Los Angeles courthouse on Wednesday, he did so with a team that appeared to be wearing Meta’s camera-equipped Ray-Ban smart glasses. Judge Carolyn Kuhl was concerned. According to CNBC, Kuhl warned anyone recording with the glasses, “If you have done that, you must delete that, or you will be held in contempt of the court.” Kuhl also ordered everyone wearing AI smart glasses to remove them. Even after the warning, at least one person was seen wearing the glasses around jurors in a courthouse hallway, although plaintiff attorney Rachel Lanier was told the glasses weren’t recording at the time. Glasses with recording capabilities have sparked concerns about privacy, surveillance, and doxxing in all kinds of places, and the courtroom is no exception. Earlier this month, a user on the r/legaladvice subreddit shared a post asking for advice on reporting a plaintiff wearing Meta’s glasses to court. Additionally, over recent months, a few states have moved to specifically ban smart glasses from courthouses, including the US District Courts for the District of Hawaii and the Western District of Wisconsin. The Forsyth County Court in North Carolina also banned smart glasses last year. Colorado’s District Court is considering a ban as well.

Feb 19 | Lauren Feiner | The executive that helped build Meta’s ad machine is trying to expose it
Brian Boland spent more than a decade figuring out how to build a system that would make Meta money. On Thursday, he told a California jury it incentivized drawing more and more users, including teens, onto Facebook and Instagram — despite the risks. Boland’s testimony came a day after Meta CEO Mark Zuckerberg took the stand in a case over whether Meta and YouTube are liable for allegedly harming a young woman’s mental health. Zuckerberg framed Meta’s mission as balancing safety with free expression, not revenue. Boland’s role was to counter this by explaining how Meta makes money, and how that shaped its platforms’ design. Boland testified that Zuckerberg fostered a culture that prioritized growth and profit over users’ wellbeing from the top down. He said he’s been described as a whistleblower — a term Meta has broadly sought to limit for fear it would prejudice the jury, but which the judge has generally allowed. Over his 11 years at Meta, Boland said he went from having “deep blind faith” in the company to coming to the “firm belief that competition and power and growth were the things that Mark Zuckerberg cared about most.”

Feb 19 | Lauren Feiner | Someone was still wearing Meta’s Ray-Bans in the courthouse after a judge warned against it
Plaintiff attorney Rachel Lanier told Judge Carolyn Kuhl this morning that after she’d admonished against using smart glasses in the courthouse, they learned that one person was still wearing them in the hallway where jurors were present. After alerting Meta’s counsel, Lanier said they were told the glasses weren’t recording.

Feb 19 | Lauren Feiner | Mark Zuckerberg and his Ray-Ban entourage have their day in court
Meta CEO Mark Zuckerberg entered a downtown Los Angeles courthouse in largely the same way as all the attorneys, reporters, and advocates who’d come to watch his landmark trial testimony, but with one notable difference: he was flanked by an entourage that appeared to be wearing Meta’s Ray-Ban smart glasses. To get to the courtroom, he walked past a crowd of parents whose children died after struggling with issues they attribute to the design of social media platforms including those that Meta makes. He would spend the next eight hours often answering questions in his signature matter-of-fact (or less charitably, monotone) cadence, denying his platform was liable for the harms. Zuckerberg was questioned through the morning session by Mark Lanier, the lead litigator for plaintiff K.G.M. She’s a 20-year-old woman who claims Meta and Google’s design features encouraged her to compulsively use their apps and led to mental health issues, which the companies generally deny. Lanier’s charismatic style, drawing from his other profession as a pastor, was in stark contrast to Zuckerberg’s responses on the witness stand, where he tried to inject nuance into how employees discussed — and sometimes criticized — various safety decisions. At times, Zuckerberg pushed back on Lanier’s characterization of his testimony. “That’s not what I’m saying at all,” he said at one point, according to NPR. Meanwhile, the judge admonished people in the courtroom not to wear Meta’s AI glasses, and that they could be held in contempt of court if they fail to delete any recordings; parents whose children died after experiencing harms they attribute to his platform watched on.

Feb 18 | Lauren Feiner | Zuckerberg enters the courthouse to testify about safety on Instagram
The Meta CEO walked through the public entrance of the LA Superior Court and past parent advocates and media waiting to learn if they’d get a seat to hear his testimony.

Feb 18 | Lauren Feiner | Mark Zuckerberg is taking the stand as social media goes on trial
Lori Schott didn’t care what it took to haul her way from her small town in Eastern Colorado to show up to a Los Angeles courtroom where Meta CEO Mark Zuckerberg is set to testify Wednesday. “I don’t care if I had to hire a pack mule to get me here, I was going to be here,” she told The Verge outside the courthouse Tuesday. Schott’s daughter Annalee died by suicide at age 18 in 2020, after struggling with body image issues that her mother says were heightened by social media. After her death, Schott found journal entries where Annalee disparaged her own looks and compared herself to other girls’ profiles. “I was so worried about what my child was putting out online, I didn’t realize what she was receiving,” Schott said.

Feb 17 | Lauren Feiner | The social media addiction trial is delayed — again
Just after we entered the courtroom, we learned that a juror has been hospitalized. The parties decided to postpone today’s testimony from former Meta employees to see if the juror can return. Regardless, Meta CEO Mark Zuckerberg is expected to testify tomorrow — either before the original juror, or an alternate.

Jan 26 | Lauren Feiner | Internal chats show how social media companies discussed teen engagement
Recently released documents show the big business opportunity that social media companies saw in recruiting teens to their platforms and how they discussed risks that heavy digital engagement could pose. The documents were released last week as part of a major set of trials brought by school districts, state attorneys general, and others against Meta, Snap, TikTok and YouTube, alleging the design of their products harmed young users. The Tech Oversight Project, which advocates for more regulations on tech platforms to safeguard teens online, compiled a report on the newly released documents, which were independently reviewed by The Verge. On Monday, a federal judge will hear arguments that will determine the scope of the trials, the first of which kicks off in June.

Jan 26 | Lauren Feiner | 2026 is the year of social media’s legal reckoning
Over the next year, top social media executives are expected to take the witness stand to defend their companies from an avalanche of claims about their failure to protect kids. The first of many trials is set to begin this month in California, when Meta, TikTok, and YouTube face claims that tech addiction harmed a teenager’s mental health. This case is just the tip of the legal iceberg. It kicks off one of two sets of so-called bellwether trials, where judges will hear a selection of cases that represent similar claims, whose outcomes will likely inform settlement amounts for the remaining cases. There are still thousands more cases behind them. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Travelogues_of_Palestine] | [TOKENS: 2052] |
Contents Travelogues of Palestine Travelogues of Palestine are the written descriptions of the region of Palestine by travellers, particularly prior to the 20th century. The works are important sources in the study of the history of Palestine and of Israel. Surveys of the geographical literature on Palestine were published by Edward Robinson in 1841, Titus Tobler in 1867 and subsequently by Reinhold Röhricht in 1890. Röhricht catalogued 177 works between 333 and 1300 CE, 19 works in the 14th century, 279 works in the 15th century, 333 works in the 16th century, 390 works in the 17th century, 318 works in the 18th century and 1,915 works in the 19th century. Important early works include the early Christian pilgrim accounts Itinerarium Burdigalense, Itinerarium Egeriae and Itinerarium Placentinum. Notable Islamic works include the 11th century Safarnama of Nasir Khusraw, the 14th century Travels of Ibn Battuta, and the 17th century Seyahatnâme by Evliya Çelebi. Notable Jewish works include the itineraries of Benjamin of Tudela and Petachiah of Regensburg. The number of published travelogues proliferated during the 19th century, and these travelers' impressions of 19th-century Palestine have often been quoted in the history and historiography of the region, although their accuracy and impartiality have been called into question in modern times. In total, there are more than 3,000 books and other materials detailing accounts of the journeys of primarily European and North American travelers to Ottoman Palestine. Notable 19th century works include Biblical Researches in Palestine by Edward Robinson, The Land and the Book by William McClure Thomson, and The Historical Geography of the Holy Land by George Adam Smith. List of pre-20th century travelogues The below list is ordered chronologically by years of travel, also indicating first publication and/or the edition available online. The list currently includes fewer than 200 works, out of an estimated 3,500 known works published prior to the 20th century. The individual entries are not reproduced here because their titles were not preserved in this copy; identifiable items include works and editions by Giovanni Zuallardo (1595), François Baron de Tott (1786), Bertrandon de La Brocquière (1807), Edward Daniel Clarke (1813), William Turner (1820), Thomas Wright (1848), George Anson (1853), and Elizabeth A. Finn (1878), along with numerous translations and editions issued by the Palestine Pilgrims' Text Society, T. & T. Clark, J. C. Hinrichs, Ashgate and other publishers. 20th century See also References Sources |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Arthur_Hays_Sulzberger] | [TOKENS: 1123] |
Contents Arthur Hays Sulzberger Arthur Hays Sulzberger (September 12, 1891 – December 11, 1968) was publisher of The New York Times from 1935 to 1961. During that time, daily circulation rose from 465,000 to 713,000 and Sunday circulation from 745,000 to 1.4 million; the staff more than doubled, reaching 5,200; advertising linage grew from 19 million to 62 million column inches per year; and gross income increased almost sevenfold, reaching $117 million.[citation needed] Early life and education Sulzberger was born in New York City, on September 12, 1891. His parents were Cyrus Leopold Sulzberger, a cotton-goods merchant, and Rachel Peixotto Hays. They came from old Jewish families, Ashkenazi and Sephardic, respectively. His great-great-grandfather, Benjamin Seixas, brother of the famous rabbi and American Revolutionary Gershom Mendes Seixas of Congregation Shearith Israel, was one of the founders of the New York Stock Exchange. His great-grandfather, Dr. Daniel Levy Maduro Peixotto, was a prominent physician and director of Columbia University's Medical College and a member of the Philolexian Society. His great-granduncle was Jacob Hays, the High Constable of New York from 1801 to 1850. In 1909, Sulzberger graduated from Horace Mann School. In 1913, he graduated from Columbia College. Career In 1918, Sulzberger began working at The New York Times, and became publisher when his father-in-law, Adolph Ochs, the previous Times publisher, died in 1935. In 1929, he founded Columbia's original Jewish Advisory Board and served on the board of what became Columbia-Barnard Hillel for many years. He served as a university trustee from 1944 to 1959 and is honored with a floor at the journalism school. He also served as a trustee of the Rockefeller Foundation from 1939 to 1957. He was elected a Fellow of the American Academy of Arts and Sciences in 1950. In 1954, Sulzberger received The Hundred Year Association of New York's Gold Medal Award "in recognition of outstanding contributions to the City of New York." Carl Bernstein's 1977 exposé of the media's CIA ties noted that "Sulzberger signed a secrecy agreement with the CIA in the 1950s, according to CIA officials—a fact confirmed by his nephew, C. L. Sulzberger." In 1956, Sulzberger received the Elijah Parish Lovejoy Award as well as an honorary Doctor of Laws degree from Colby College. In 1961, he was succeeded as publisher first by son-in-law Orvil Dryfoos, then, two years later in 1963, by his son Arthur Ochs "Punch" Sulzberger. Sulzberger broadened the Times' use of background reporting, pictures, and feature articles, and expanded its sections. He supervised the development of facsimile transmission for photographs and built the Times radio station, WQXR, into a leading vehicle for news and music. Under Sulzberger the Times began to publish editions in Paris and Los Angeles with remote-control typesetting machines. Sulzberger is credited with the quote: "We journalists tell the public which way the cat is jumping. The public will take care of the cat." Anti-Zionism A practicing Reform Jew, Sulzberger was an enthusiastic supporter of the American Council for Judaism, founded in June 1942 to oppose Zionism, giving it prominent coverage in his newspaper. In a 1946 speech, Sulzberger stated that Zionism was to blame for some of the Jewish deaths in the Holocaust, and that the refugee crisis during the war had been "a manageable, social and economic problem" until "the clamor for statehood introduced an insoluble political element" into the issue. 
He added that it was his judgment that "thousands dead might now be alive" had Zionists put "less emphasis on statehood". Because of his principled stand against Zionism and a Jewish state of Israel, he has been accused by Laurel Leff, in her 2005 book Buried by the Times, of deliberately burying accounts of Nazi atrocities against Jews in the back pages of the Times. She alleges that Sulzberger went out of his way to play down the special victimhood of Jews and withheld support for specific rescue programs for European Jews. Personal life In 1917, he married Iphigene Bertha Ochs, the daughter of Adolph Ochs and Effie Wise (a daughter of Rabbi Isaac Mayer Wise). They had four children: Marian Sulzberger Heiskell (1918–2019), widow first of Times publisher Orvil Dryfoos and then of Time Inc. chairman Andrew Heiskell; Ruth Sulzberger Holmberg (1921–2017), publisher of the Chattanooga Times, married and divorced from Ben Hale Golden; Judith Sulzberger (1923–2011), physician, married Matthew Rosenschein Jr. (divorced), Dick Cohen (divorced), and Budd Levinson; and Arthur Ochs Sulzberger (1926–2012). Death Sulzberger died December 11, 1968, in New York City. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-81] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
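As a quick illustration of the principle behind the slide rule just described, multiplication reduces to adding logarithmic lengths, which is all the instrument mechanizes. A minimal sketch (the values are chosen arbitrarily for illustration):

import math

# Slide-rule principle: log10(a) + log10(b) = log10(a * b). Sliding two logarithmic
# scale lengths end to end adds the logarithms; the product is read back off the scale.
a, b = 2.0, 3.0
length = math.log10(a) + math.log10(b)  # combined length on the rule
print(10 ** length)                     # ~6.0, limited in practice only by reading precision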
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
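To illustrate Shannon's point that switching circuits realize Boolean algebra, here is a one-bit full adder built only from AND, OR and XOR, chained into a ripple-carry adder. This is an illustrative sketch of the general idea, not a model of any particular relay or vacuum-tube machine:

def full_adder(a, b, carry_in):
    # Sum and carry of three input bits, expressed purely in Boolean operations.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_words(x, y, width=8):
    # Ripple-carry addition of two unsigned integers, one bit position at a time.
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_words(23, 42))  # 65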
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. As on the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
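The following Python sketch (illustrative only; the opcode numbers, instruction layout and cell contents are invented for this example and do not describe any real CPU) shows the ideas above in miniature: memory is a numbered list of cells holding both program and data, the program counter selects the next instruction, and a conditional jump changes the program counter to produce a loop.

```python
# A toy illustration (not the article's design): memory is a list of numbered
# cells holding both the program and data, the program counter picks the next
# instruction, and a jump instruction simply overwrites the program counter.
LOAD, ADD1, JUMP_IF_LESS, HALT = 0, 1, 2, 3        # made-up opcodes for this sketch

memory = [
    (LOAD, 0, 6),          # cell 0: copy the number in cell 6 into register 0
    (ADD1, 0, 0),          # cell 1: add 1 to register 0
    (JUMP_IF_LESS, 0, 1),  # cell 2: if register 0 < 5, set the program counter to 1
    (HALT, 0, 0),          # cell 3: stop
    None,                  # cell 4: unused
    None,                  # cell 5: unused
    0,                     # cell 6: a data value
]

registers = {0: 0}
pc = 0                                  # program counter: address of the next instruction
while True:
    opcode, reg, operand = memory[pc]   # fetch and decode
    pc += 1                             # by default, move on to the next cell
    if opcode == LOAD:
        registers[reg] = memory[operand]
    elif opcode == ADD1:
        registers[reg] += 1
    elif opcode == JUMP_IF_LESS:
        if registers[reg] < 5:
            pc = operand                # a "jump": change where the next instruction is read
    elif opcode == HALT:
        break

print(registers[0])                     # prints 5 after the loop body has run five times
```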
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
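Before the article turns to I/O devices, the following short Python sketch (illustrative only, not from the source) shows the byte conventions just described: a single byte holds one of 256 values, negative numbers use two's complement, and larger numbers occupy several consecutive bytes.

```python
# A small sketch (not from the source) of the byte conventions described above:
# one byte holds one of 256 values, negative numbers use two's complement, and
# larger numbers span several consecutive bytes.
x = -100
one_byte = x.to_bytes(1, byteorder="little", signed=True)
print(one_byte.hex())                                    # '9c': the two's-complement bit pattern 0x9C
print(int.from_bytes(one_byte, "little", signed=True))   # -100 again
print(int.from_bytes(one_byte, "little", signed=False))  # 156: the same bits read as unsigned

big = 1_000_000
four_bytes = big.to_bytes(4, byteorder="little", signed=True)  # larger numbers need more bytes
print(four_bytes.hex(), int.from_bytes(four_bytes, "little", signed=True))
```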
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
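As a toy illustration of the time-sharing idea described above (a sketch only, not from the source, and far simpler than real interrupt-driven multitasking), the Python code below writes two "programs" as generators and switches between them after every small slice of work, so their output interleaves as if both were running at once on a single processor.

```python
# A toy sketch (not from the source) of time-sharing: two "programs" are written
# as generators, and a simple round-robin scheduler switches between them after
# every small "time slice", giving the appearance that both run at once.
def count(name, n):
    for i in range(n):
        yield f"{name}: step {i}"   # yielding is roughly where an interrupt would switch tasks

def scheduler(tasks):
    # Round-robin: give each task one slice in turn until all have finished.
    while tasks:
        task = tasks.pop(0)
        try:
            print(next(task))
            tasks.append(task)      # not finished: put it back at the end of the queue
        except StopIteration:
            pass                    # finished: drop it

scheduler([count("A", 3), count("B", 3)])
# Output interleaves the two programs: A: step 0, B: step 0, A: step 1, B: step 1, ...
```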
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
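A minimal Python sketch of that task is shown below (illustrative only; the article's own version of this example, mentioned next, is given in MIPS assembly and is not reproduced here).

```python
# The repetitive task from the paragraph above, as a short loop (a sketch, not
# the article's MIPS listing): add together every number from 1 to 1,000.
total = 0
for n in range(1, 1001):   # repeat the same simple instruction 1,000 times
    total += n
print(total)               # 500500

# The same result from the closed-form formula n*(n+1)/2, as a cross-check.
print(1000 * 1001 // 2)    # 500500
```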
The following example is written in the MIPS assembly language: Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
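As a sketch of what an assembler does (illustrative only; the numeric opcodes below are invented for a made-up machine, just as real machine code is specific to a particular CPU design), the following Python function turns a short assembly-style listing into the list of numbers that would sit in memory alongside the data it operates on.

```python
# A sketch (not from the source) of an assembler: each mnemonic is translated to
# the numeric opcode of a made-up machine, so the whole program becomes a list of
# numbers that can be stored in memory like any other data.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "JUMP": 0x04}   # invented encoding

def assemble(source_lines):
    machine_code = []
    for line in source_lines:
        mnemonic, operand = line.split()
        machine_code.append(OPCODES[mnemonic])   # the instruction's unique number
        machine_code.append(int(operand))        # its operand, also just a number
    return machine_code

program = ["LOAD 9", "ADD 1", "SUB 4", "JUMP 0"]
print(assemble(program))   # [1, 9, 2, 1, 3, 4, 4, 0]: the program as a list of numbers
```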
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-311] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
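As a rough numerical check of Michell's 1784 estimate described above (a sketch using standard modern constants, not a reproduction of his argument): for a body of fixed density the mass grows as the cube of the radius, so the escape velocity $\sqrt{2GM/R}$ grows in proportion to the radius, and 500 solar radii at solar density is just enough to push it past the speed of light.

```python
# A rough numerical check of Michell's 1784 estimate (a sketch using standard
# constants; not from the source). For constant density, mass grows as R^3, so
# the escape velocity sqrt(2*G*M/R) grows in proportion to R.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m

R = 500 * R_sun                        # same density as the Sun, 500 times its radius
M = M_sun * 500**3                     # mass scales with volume at fixed density
v_escape = math.sqrt(2 * G * M / R)

print(f"escape velocity ~ {v_escape/1000:.0f} km/s, c ~ {c/1000:.0f} km/s")
# The escape velocity comes out near 309,000 km/s, slightly above the speed of
# light, matching Michell's conclusion that no emitted light could escape.
```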
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
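As a worked check of the Hawking temperature mentioned above (a sketch, not from the source; it uses the standard textbook relation in which the temperature is inversely proportional to the black hole's mass), the following Python snippet evaluates the temperature for a 10-solar-mass black hole and recovers the "billionths of a kelvin" figure quoted in the article's introduction.

```python
# Worked check (a sketch, not from the source) of the Hawking temperature, using
# the standard relation T = hbar * c^3 / (8 * pi * G * M * k_B), in which the
# temperature is inversely proportional to the black hole's mass.
import math

hbar = 1.055e-34     # reduced Planck constant, J*s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # kg

def hawking_temperature(mass_kg: float) -> float:
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"{hawking_temperature(10 * M_sun):.2e} K")   # roughly 6e-9 K for a 10-solar-mass hole
# A few billionths of a kelvin, which is why the radiation from stellar black
# holes is essentially impossible to observe directly, as the article notes.
```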
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this theory has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often fast; one supermassive black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole, Sagittarius A*, rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole carries angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded (in Gaussian units) by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle, which prevents identical fermions from occupying the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will end up as white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. 
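The nonspinning charge bound quoted above is written in Gaussian units; setting J = 0 in the extremality inequality gives the SI equivalent Q ≤ √(4πε₀G) M. A short sketch of that conversion follows; the one-solar-mass example is illustrative, not a figure from the article.

```python
# Sketch: SI form of the nonspinning charge bound, Q_max = sqrt(4*pi*eps0*G) * M,
# obtained by setting J = 0 in the extremality inequality above.
import math

G, eps0, M_sun = 6.674e-11, 8.854e-12, 1.989e30

def max_charge(M):
    """Maximum charge (coulombs) of a nonspinning black hole of mass M (kg)."""
    return math.sqrt(4 * math.pi * eps0 * G) * M

print(f"{max_charge(M_sun):.2e} C")  # ~1.7e20 C for one solar mass
```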
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole. Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will become unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around accreting black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One proposed method of fuelling these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion. 
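The mass classes described above can be summarized in a small sketch; the boundaries used here simply follow the approximate ranges quoted in this section and are not sharp physical thresholds.

```python
# Toy sketch: approximate mass classes quoted above (boundaries are rough).
def classify(mass_solar):
    """Classify a black hole by mass in solar masses, per the ranges above."""
    if mass_solar < 1:
        return "micro / primordial candidate"
    if mass_solar < 1e2:
        return "stellar"
    if mass_solar < 1e5:
        return "intermediate-mass"
    if mass_solar < 1e9:
        return "supermassive"
    return "ultramassive"

for m in (10, 3e3, 4.3e6, 6.5e9):
    print(m, classify(m))
```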
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape resembling that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where r_ISCO is the radius of the ISCO, r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde). 
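The ISCO formula above is easy to evaluate numerically. A minimal sketch follows; the example masses (a 10 M☉ stellar hole and the 4.3×10⁶ M☉ figure quoted later for Sagittarius A*) are illustrative choices.

```python
# Sketch: innermost stable circular orbit for a nonspinning (Schwarzschild) black
# hole, r_ISCO = 3 * r_s = 6 G M / c^2.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def r_isco_schwarzschild(M):
    return 3 * schwarzschild_radius(M)

for m in (10 * M_sun, 4.3e6 * M_sun):
    print(f"M = {m/M_sun:.1e} M_sun: r_s = {schwarzschild_radius(m):.3e} m, "
          f"r_ISCO = {r_isco_schwarzschild(m):.3e} m")
```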
For example, the ISCO for a particle orbiting retrograde can be as far out as about {\displaystyle 9r_{\text{s}}}, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary on which photons moving tangentially to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations. The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will lie 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will lie 3–5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease with increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime itself rotates, similar to a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region. 
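The photon-sphere description above connects directly to the apparent size of a black hole shadow. The sketch below uses the standard textbook critical impact parameter for a Schwarzschild hole, b_c = 3√3 GM/c² (not quoted in the article), and assumed illustrative values for M87*'s mass (~6.5×10⁹ M☉) and distance (~16.8 Mpc), which are not figures given in this article.

```python
# Sketch: Schwarzschild photon-sphere radius (1.5 r_s) and apparent shadow size,
# using the textbook critical impact parameter b_c = 3*sqrt(3)*G*M/c^2.
# M87* mass and distance below are assumed illustrative values.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
pc = 3.086e16  # metres per parsec

def photon_sphere_radius(M):
    return 1.5 * 2 * G * M / c**2          # 1.5 Schwarzschild radii

def shadow_angular_diameter(M, distance):
    b_crit = 3 * math.sqrt(3) * G * M / c**2   # critical impact parameter
    return 2 * b_crit / distance               # small-angle approximation, radians

M_m87 = 6.5e9 * M_sun
d_m87 = 16.8e6 * pc
microarcsec = shadow_angular_diameter(M_m87, d_m87) * math.degrees(1) * 3600e6
print(f"{microarcsec:.0f} microarcseconds")    # roughly 40 uas
```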
In this area it is no longer possible for free-falling matter to follow circular orbits or to stop its final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km,} } where r_s is the Schwarzschild radius and M☉ is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] shrinking until an extremal black hole has an event horizon close to {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred. For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole. This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer. All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle. Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. 
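The Schwarzschild-radius formula and the inverse-square density scaling above can be verified with a few lines of arithmetic; the 10⁸ M☉ example simply reproduces the water-like density mentioned in the text.

```python
# Sketch: Schwarzschild radius r_s = 2GM/c^2 and the "average density" of a black
# hole treated as its mass spread over the volume inside r_s, illustrating the
# 1/M^2 scaling described above.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def r_s(M):
    return 2 * G * M / c**2

def mean_density(M):
    return M / ((4 / 3) * math.pi * r_s(M)**3)

print(f"{r_s(M_sun)/1e3:.2f} km per solar mass")   # ~2.95 km
print(f"{mean_density(1e8 * M_sun):.0f} kg/m^3")   # order 1e3, roughly water-like
```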
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter describing the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity. Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point. However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative formulations of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes, which lack singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback. 
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse involving hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing heavier and heavier elements, until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time for a black hole to reach quasar masses by accretion alone. One suggestion is the direct collapse of clouds of nearly pure hydrogen gas (low metallicity), characteristic of the young universe, forming a supermassive star that collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is typically unstable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar. A neutron star in a common envelope with a regular star can accrete sufficient material to collapse into a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in some regions could have become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. 
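The growth-time argument for early quasars can be made rough and quantitative. The sketch below uses the Salpeter e-folding time for Eddington-limited accretion and a 10% radiative efficiency; both are standard textbook assumptions rather than figures from the article, and they illustrate why a massive direct-collapse seed eases the timing problem.

```python
# Sketch: growth time under Eddington-limited accretion. The e-folding (Salpeter)
# time t_S = eps * sigma_T * c / (4*pi*G*m_p*(1-eps)) and the 10% radiative
# efficiency are standard assumptions, not figures from the article.
import math

G = 6.674e-11
c = 2.998e8
m_p = 1.673e-27          # proton mass, kg
sigma_T = 6.652e-29      # Thomson cross-section, m^2
yr = 3.156e7             # seconds per year

def growth_time_years(m_seed, m_final, eps=0.1):
    t_salpeter = eps * sigma_T * c / (4 * math.pi * G * m_p * (1 - eps))
    return t_salpeter * math.log(m_final / m_seed) / yr

# A ~1e5 solar-mass direct-collapse seed reaches ~1e9 solar masses well within a
# billion years, while a ~10 solar-mass stellar seed takes close to a billion.
print(f"{growth_time_years(1e5, 1e9):.2e} yr")
print(f"{growth_time_years(10, 1e9):.2e} yr")
```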
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as a binary pair of supermassive black holes approaches, most nearby stars are ejected, leaving little for the remaining black holes to interact with gravitationally that would allow them to draw closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles and carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk. 
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of about 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole cannot be reduced to zero. These laws are mathematical analogs of the laws of thermodynamics. 
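The quoted 62 nK figure and the sub-lunar evaporation threshold follow directly from the standard Hawking temperature formula, T_H = ħc³/(8πGMk_B). A minimal sketch reproducing both numbers:

```python
# Sketch: Hawking temperature T_H = hbar*c^3 / (8*pi*G*M*k_B), reproducing the
# ~62 nK value quoted above for one solar mass and the mass below which a hole
# out-radiates the 2.7 K cosmic microwave background.
import math

hbar = 1.055e-34
G = 6.674e-11
c = 2.998e8
k_B = 1.381e-23
M_sun = 1.989e30
M_moon = 7.35e22       # kg, for comparison

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def mass_for_temperature(T):
    return hbar * c**3 / (8 * math.pi * G * T * k_B)

print(f"{hawking_temperature(M_sun)*1e9:.0f} nK")            # ~62 nK
M_crit = mass_for_temperature(2.7)
print(f"{M_crit:.2e} kg, {M_crit/M_moon:.2f} lunar masses")  # below the Moon's mass
```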
They are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy, which scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated. Observational evidence Millions of black holes with around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope depends on its aperture and on the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture comparable to the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise to be able to detect them. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. 
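The Earth-sized-aperture requirement mentioned above is essentially a diffraction-limit argument. The sketch below assumes an observing wavelength of about 1.3 mm, typical of millimetre-wave VLBI but not a figure given in this article, and compares the resulting resolution with the shadow sizes computed earlier.

```python
# Sketch: diffraction-limited angular resolution ~lambda/D for an Earth-sized
# aperture. The ~1.3 mm observing wavelength is an assumed illustrative value.
import math

earth_diameter = 1.2742e7   # m

def angular_resolution_microarcsec(wavelength, aperture):
    theta = wavelength / aperture                 # radians
    return theta * math.degrees(1) * 3600e6       # to microarcseconds

print(f"{angular_resolution_microarcsec(1.3e-3, earth_diameter):.0f} uas")
# ~20 uas: comparable to the apparent shadow sizes of M87* and Sagittarius A*,
# which is why a globe-spanning array of telescopes is required.
```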
In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a mass of 2.6×10⁶ M☉ must be contained within a radius of 0.02 light-years, coincident with Sagittarius A*. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the estimate of the mass of Sagittarius A* to 4.3×10⁶ M☉, contained within a radius of less than 0.002 light-years. This upper limit on the radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit the majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) sets the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. 
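The Keplerian mass estimate described above amounts to applying Kepler's third law to a single stellar orbit. The sketch below uses rounded assumed values for S2's orbital period (~16 years) and semi-major axis (~1000 AU), which are not figures from this article, and recovers a mass of roughly the quoted few ×10⁶ M☉.

```python
# Sketch: Keplerian enclosed-mass estimate, M = 4*pi^2*a^3 / (G*P^2).
# The S2 orbital parameters below are rounded assumed values.
import math

G = 6.674e-11
M_sun = 1.989e30
AU = 1.496e11
yr = 3.156e7

def enclosed_mass(semi_major_axis_m, period_s):
    return 4 * math.pi**2 * semi_major_axis_m**3 / (G * period_s**2)

M = enclosed_mass(1000 * AU, 16 * yr)
print(f"{M / M_sun:.1e} solar masses")   # roughly 4e6
```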
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars, supported at still higher densities, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure. 
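The unresolvable image separation mentioned in the microlensing description can be estimated with the standard angular Einstein radius, θ_E = √((4GM/c²)·(D_s − D_l)/(D_l·D_s)), a textbook lensing formula not written out in the article. The lens mass and distances in the sketch below are illustrative assumptions; the resulting separation of a few milliarcseconds is indeed below the resolution of most telescopes.

```python
# Sketch: angular Einstein radius of a lensing black hole,
#   theta_E = sqrt( (4GM/c^2) * (D_s - D_l) / (D_l * D_s) ).
# The 7 M_sun lens and the distances below are illustrative assumptions.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
pc = 3.086e16

def einstein_radius_mas(M, d_lens, d_source):
    theta = math.sqrt(4 * G * M / c**2 * (d_source - d_lens) / (d_lens * d_source))
    return theta * math.degrees(1) * 3600e3   # radians -> milliarcseconds

print(f"{einstein_radius_mas(7 * M_sun, 1.5e3 * pc, 8e3 * pc):.1f} mas")
```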
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift {\displaystyle z\geq 7}. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies. 
Finally, certain mechanisms could allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk trapping radiation so that the outward radiation pressure does not halt accretion. However, the formation of bipolar jets can limit super-Eddington accretion rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting close to a black hole, with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been portrayed as wormholes or other means of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links |
======================================== |