[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-FOOTNOTEStaal198863-23]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD/CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m , while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. 
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), and the stars "hanging" from the Belt are known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth-brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. 
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who can retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. 
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He sometimes is depicted to have a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and are referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also References External links |
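The visibility claims in the Characteristics and Future sections above reduce to a simple spherical-astronomy relation: a star's greatest altitude above the horizon is 90° minus the absolute difference between the observer's latitude and the star's declination, and at the poles its altitude is constant and equal in size to its declination. The short sketch below is only an illustration of that rule, not material from the article; it assumes an approximate J2000 declination for Rigel (about −8.2°) and ignores atmospheric refraction. It reproduces the roughly 8° altitude quoted for Rigel at the South Pole and shows why precession carrying a star below about −38° declination would hide it from Great Britain (about 52° N).

```python
# Minimal sketch (not from the article) of star visibility versus latitude.
# Assumptions: approximate J2000 declination for Rigel, flat horizon, no refraction.

def max_altitude_deg(latitude_deg: float, declination_deg: float) -> float:
    """Highest altitude a star reaches, in degrees; negative means it never rises."""
    return 90.0 - abs(latitude_deg - declination_deg)

def altitude_at_pole_deg(declination_deg: float, south_pole: bool = True) -> float:
    """At the poles a star neither rises nor sets; its altitude is constant."""
    return -declination_deg if south_pole else declination_deg

RIGEL_DEC = -8.2  # degrees; assumed approximate value for illustration

# Rigel as seen from the Amundsen–Scott South Pole Station: about 8° up, all day.
print(round(altitude_at_pole_deg(RIGEL_DEC), 1))    # ~8.2

# From Great Britain (~52° N) today, Rigel still culminates comfortably high.
print(round(max_altitude_deg(52.0, RIGEL_DEC), 1))  # ~29.8

# Once precession pushes a star's declination below (latitude - 90°),
# here below about -38°, it never clears the horizon at that latitude.
print(max_altitude_deg(52.0, -40.0) > 0)            # False
```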
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-9]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American artificial intelligence (AI), social media, and technology company that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. He recruited Igor Babuschkin, formerly associated with Google's DeepMind unit, as chief engineer. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital, and Tribe Capital. As of August 2024[update], Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired its sister company X Corp., the developer of the social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, 2025, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, 2025, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus. 
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power; the expansion is driven by xAI's effort to compete with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring reorganises xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine, while Grokipedia, X, and API features fall under smaller teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. 
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-23]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s, the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York, has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech, to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equate to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems for information transfer, sharing and exchanging business data and logistics and is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, APNIC for Asia and the Pacific, ARIN for North America, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for the fiber optic submarine communication cables that interconnect the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
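To make the last point concrete, here is a minimal sketch (not from the article) of how a program obtains the IP addresses that the infrastructure routes on, by asking the operating system's resolver for a host name; example.com is a placeholder host, and both IPv4 and IPv6 results may appear where available.

```python
# Minimal sketch (the host "example.com" is only an illustration): resolving a
# domain name to the IP addresses that the Internet infrastructure routes on.
import socket

results = socket.getaddrinfo("example.com", None, type=socket.SOCK_STREAM)
for family, _type, _proto, _canonname, sockaddr in results:
    if family == socket.AF_INET:
        print("IPv4:", sockaddr[0])    # 32-bit address in dotted-decimal form
    elif family == socket.AF_INET6:
        print("IPv6:", sockaddr[0])    # 128-bit address in hexadecimal form
```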
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
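The CIDR and subnet-mask arithmetic described above can be reproduced with Python's standard ipaddress module; the prefixes 198.51.100.0/24 and 2001:db8::/32 are the same documentation prefixes quoted in the passage, and the host 198.51.100.17 is an arbitrary illustrative address.

```python
# Minimal sketch of the CIDR / subnet-mask arithmetic described above,
# using the documentation prefixes quoted in the passage.
import ipaddress

net4 = ipaddress.ip_network("198.51.100.0/24")
print(net4.network_address)   # 198.51.100.0  -- the routing prefix / network number
print(net4.netmask)           # 255.255.255.0 -- the equivalent subnet mask
print(net4.num_addresses)     # 256 addresses, 198.51.100.0 through 198.51.100.255
print(ipaddress.ip_address("198.51.100.17") in net4)   # True: host is on this subnet

net6 = ipaddress.ip_network("2001:db8::/32")
print(net6.prefixlen)                   # 32-bit routing prefix
print(net6.num_addresses == 2 ** 96)    # True: 128 - 32 = 96 bits left for hosts
```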
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
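As a rough sanity check on the spread of energy-intensity estimates quoted above (0.0064 to 136 kWh per gigabyte, described as differing by a factor of 20,000), the following small calculation is an illustration only; the 5 GB transfer size is an arbitrary example, not a figure from any of the cited studies.

```python
# Rough arithmetic check on the energy-intensity range quoted above.
# The two bounds come from the passage; the 5 GB transfer is an arbitrary example.
low_kwh_per_gb = 0.0064
high_kwh_per_gb = 136.0

print(round(high_kwh_per_gb / low_kwh_per_gb))   # 21250 -- the "factor of 20,000"

transfer_gb = 5
print(low_kwh_per_gb * transfer_gb)    # 0.032 kWh under the lowest estimate
print(high_kwh_per_gb * transfer_gb)   # 680.0 kWh under the highest estimate
```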
======================================== |
[SOURCE: https://github.com/resources/events] | [TOKENS: 748] |
Events & Webinars Discover upcoming GitHub events, webinars, and conferences. Connect with developers, explore new tools, and learn how to build, secure, and scale software with GitHub. As development accelerates, driven by AI like GitHub Copilot, the need for security that moves at the speed of code is more critical than ever (San Francisco, March 23-26). Join us one day before the Microsoft AI Tour in London for an exclusive in-depth workshop to unlock the next generation of collaborative coding with GitHub Copilot (London, UK, February 23, 2026, 12:00PM). Join GitHub at the Microsoft AI Tour in London for a free, one-day event where you'll have the chance to learn directly from top experts, get hands-on with the latest AI technologies and meet with the GitHub team at our booth! (London, UK, February 24, 2026). Learn about the latest AppSec trends and bring your questions! This is an open format designed to explore your questions and use cases, happening monthly and virtually on the EMEA time-zone. 2025 was the year agents went from buzzword to battle-tested; this talk shows how top teams are using them to ship faster, cut technical debt, and what's coming in 2026 (Feb 26, 2026). GitHub Connect is designed for technology champions and leaders focused on accelerating AI adoption, driving enterprise transformation, and safeguarding their most critical code assets (Toronto, ON, March 5, 2026). Join GitHub at the Microsoft AI Tour in Washington D.C. for a free, one-day event where you'll have the chance to learn directly from the top experts, get hands-on with the latest AI technologies and meet with the GitHub team at our booth (Washington D.C., March 10, 202). Join GitHub at the Microsoft AI Tour in Paris for a free, one-day event where you'll have the chance to learn directly from top experts, get hands-on with the latest AI technologies and meet with the GitHub team at our booth! (Paris, FR, March 11, 2026). Join us at the Microsoft AI Tour Seoul, a one-day event for business leaders, developers, and IT professionals to explore the latest in AI and shape future strategy (Seoul, Korea, March 26, 2026, 7am - 5:45pm KST). GitHub Copilot offers many ways to customize how it works, and this session explains when to use instructions, custom agents, prompts, and Skills, with demos to help you get the most out of Copilot (March 26, 2026). At Google Cloud Next, learn why GitHub is the platform for agentic software development: open, integrated, and built for the way developers actually work (Las Vegas, April 22-24, 2026). Join us at the Microsoft AI Tour Sydney, a one-day event for business leaders, developers, and IT professionals to explore the latest in AI and shape future strategy (Sydney, Australia, April 23, 2026, 7am - 5:45pm AEST). Join us at the Microsoft AI Tour Zurich, a one-day event for business leaders, developers, and IT professionals to explore the latest in AI and shape future strategy (Zurich, Switzerland, April 29, 2026, 7am - 5:45pm CEST). |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ancient_astronauts] | [TOKENS: 6651] |
Contents Ancient astronauts Ancient astronauts (or ancient aliens) refers to a pseudoscientific set of beliefs, also called paleocontact, that hold that intelligent extraterrestrial beings (alien astronauts) visited Earth and made contact with humans in antiquity and prehistoric times. Proponents of the theory suggest that this contact influenced the development of modern cultures, technologies, religions, and human biology. A common position is that deities from most (if not all) religions are extraterrestrial in origin, and that advanced technologies brought to Earth by ancient astronauts were interpreted as evidence of divine status[a] by early humans. The idea that ancient astronauts existed and visited Earth is not taken seriously by academics and archaeologists, who identify such claims as pseudoarchaeological or unscientific. It has received no credible attention in peer-reviewed studies. When proponents of the idea present evidence in favor of their beliefs, it is often distorted or fabricated. Some authors and scholars also argue that ancient astronaut theories have racist undertones or implications, diminishing the accomplishments and capabilities of indigenous cultures. Well-known proponents of these beliefs in the latter half of the 20th century who have written numerous books or appear regularly in mass media include Robert Charroux, Jacques Bergier, Jean Sendy, Erich von Däniken, Alexander Kazantsev, Zecharia Sitchin, Robert K. G. Temple, Giorgio A. Tsoukalos, David Hatcher Childress, Peter Kolosimo, and Mauro Biglino. Overview Various terms are used to reference claims about ancient astronauts, such as ancient aliens, ancient ufonauts, ancient space pilots, paleocontact, astronaut- or alien gods, or paleo- or biblical-SETI (search for extraterrestrial intelligence). Believers in such ancient astronaut stories often maintain that all or some humans are either descendants or creations of extraterrestrial intelligence who landed on Earth at some point in the ancient past. An associated idea is that human knowledge, religion, and culture came from extraterrestrial visitors in ancient times, in that ancient astronauts acted as a "mother culture". Additionally, proponents often claim that travelers from outer space built many of the structures on Earth (such as Egyptian pyramids and the Moai stone heads of Easter Island) or aided humans in building them. Proponents contend that the evidence for ancient astronauts comes from documentary gaps in historical and archaeological records while citing archaeological artifacts that they believe, contrary to the mainstream explanations, are anachronistic and supposedly beyond the technical capabilities of the people who made them. These are sometimes referred to as "out-of-place artifacts"; and include artwork and legends which believers reinterpret to fit stories of extraterrestrial contact or technologies. As a pseudoarcheology, the idea receives notice in fringe pulp media, such as the History Channel series Ancient Aliens. Such shows use a strategy known as 'fire-hosing' to co-mingle fact with fiction in order to spread theories of an alternate past with tropes that follow white supremacist, nativist, imperialist, settler-colonial, and Christian Identity beliefs relevant to the past. 
The celebrity proponents of ancient aliens profess to be part of an oppressed minority of academics that 'big archaeology' is conspiring to disenfranchise, while their identity as mavericks or rogues aligns with their lack of credentials. Like archaeological endeavors of the criticized past, these proponents focus primarily on monumental archaeological structures, claiming they could only have been constructed with extraterrestrial intervention. The implication is that the non-white Indigenous people in the regions in which these monuments appear could not have built them on their own. However, Dakota/Lakota Sioux writer Ruth H. Burns, in Atmos magazine, counters that ancient alien theory and the idea of extraterrestrials in general support the viewpoints of indigenous, non-European peoples. She believes the denial of extraterrestrial encounters and indigenous peoples' stories tracing their origins to extraterrestrials is part of "Indigenous erasure," as it minimizes or completely discounts the viewpoints of indigenous peoples. Many indigenous peoples trace their ancestry to "star-people" or the like: extraterrestrials who, as the progenitors of indigenous peoples, cannot by definition be white or "Aryan." A common feature in the stories portrays the aliens as light-skinned or Aryan in complexion, as prominent alien astronaut proponent Erich von Däniken claims in his foundational work Chariots of the Gods? Some ancient astronaut proponents are thus associated with white supremacism, although their theories are sometimes applied to European cultures as well. Archaeologists have ignored the existence of these outlandish claims. However, due to rising popular belief in fringe theories, they began actively engaging with the public via social media around 2020 to advocate mainstream archaeological views. The few dedicated popular science explainers and skeptics who did offer opinions on the ideas universally panned them. For example, Carl Sagan wrote, "In the long litany of 'ancient astronaut' pop archaeology, the cases of apparent interest have perfectly reasonable alternative explanations, or have been misreported, or are simple prevarications, hoaxes and distortions". History of ancient aliens beliefs and their proponents Paleocontact or "ancient astronaut" narratives first appeared in the early science fiction of the late 19th and early 20th centuries, including the 1898 novel Edison's Conquest of Mars and the works of H.P. Lovecraft. The idea was proposed in earnest by journalist Harold T. Wilkins in 1954. It grew in popularity in the 1960s, mainly due to the Space Race and the success of Erich von Däniken's works, although it also received limited consideration as a serious hypothesis. Critics emerged throughout the 1970s, discrediting von Däniken's claims. Ufologists separated the idea from the UFO controversy. By the early 1980s, little remaining support could be found. Carl Sagan co-authored the widely popular book Intelligent Life in the Universe (1966) with Soviet astrophysicist Iosif Shklovsky. In his 1979 book Broca's Brain, Sagan suggested that he and Shklovsky might have inspired the wave of 1970s ancient astronaut books, expressing disapproval of "von Däniken and other uncritical writers" who seemingly built on these ideas not as guarded speculations but as "valid evidence of extraterrestrial contact."
Sagan pointed out that while many legends, artifacts, and purported out-of-place artifacts were cited in support of ancient astronaut hypotheses, "very few require more than passing mention" and could be easily explained with more conventional hypotheses. Sagan also reiterated his earlier conclusion that extraterrestrial visits to Earth were possible but unproven and improbable. Erich von Däniken was a leading proponent of this hypothesis in the late 1960s and early 1970s, gaining a large audience through the 1968 publication of his best-selling book Chariots of the Gods? and its sequels. According to von Däniken, certain artifacts require a more sophisticated technological ability in their construction than that which was available to the ancient cultures who constructed them. Von Däniken maintains that these artifacts were constructed either directly by extraterrestrial visitors or by humans who learned the necessary knowledge from said visitors. These include Stonehenge, Pumapunku, the Moai of Easter Island, the Great Pyramid of Giza, and the ancient Baghdad electric batteries. Von Däniken writes that ancient art and iconography throughout the world illustrates air and space vehicles, non-human but intelligent creatures, ancient astronauts, and artifacts of an anachronistically advanced technology. Von Däniken also states that geographically separated historical cultures share artistic themes, which he argues imply a common origin. One such example is von Däniken's interpretation of the sarcophagus lid recovered from the tomb of the Classic-era Maya ruler of Palenque, Pacal the Great. Von Däniken writes that the design represented a seated astronaut. The iconography and accompanying Maya text, however, identifies it as a portrait of the ruler himself with the World Tree of Maya mythology. The origins of many religions are interpreted by von Däniken as reactions to encounters with an alien race. According to his view, humans considered the technology of the aliens to be supernatural and the aliens themselves to be gods. Von Däniken states that the oral and written traditions of most religions contain references to alien visitors in the way of descriptions of stars and vehicular objects traveling through air and space. One such is Ezekiel's revelation, which Däniken interprets as a detailed description of a landing spacecraft (The Spaceships of Ezekiel). Von Däniken's hypotheses became popularized in the U.S. after the NBC-TV documentary In Search of Ancient Astronauts hosted by Rod Serling, and the film Chariots of the Gods. Critics argue that von Däniken misrepresented data, that many of his claims were unfounded, and that none of his core claims have been validated. In particular the Christian creationist community is highly critical of most of von Däniken's work. Young Earth creationist author Clifford A. Wilson published Crash Go the Chariots in 1972 in which he attempted to discredit all the claims made in Chariots of the Gods. In Chariots of the Gods?, regarding the Nazca Lines, von Däniken states that "Seen from the air, the clear-cut impression that the 37-mile (60 km) long plain of Nazca made on me was that of an airfield." Considering he was in the process of seeking evidence of ancient aliens, von Däniken exhibits confirmation bias, as he does not consider the Nazca Lines to be man-made until after the publication of Chariots of the Gods? 
This etic perspective that he presents could be easily accepted by a reader familiar with air travel, and an undeveloped knowledge of the nature of the geoglyphs. Furthermore, since the majority of readers of Chariots of the Gods? are not educated in viewing artifacts from ancient civilizations, their interpretations are highly subject to von Däniken's opinions of the artifacts. Kenneth L. Feder argues a reader seeing the Nazca Lines for the first time in a book about aliens would be much more likely to associate those features with extraterrestrial origins, rather than from a civilization that existed on Earth. In 1970, von Däniken admitted that the Nazca markings "could have been laid out on their gigantic scale by working from a model using a system of coordinates." Zecharia Sitchin's series The Earth Chronicles, beginning with The 12th Planet, revolves around Sitchin's unique interpretation of ancient Sumerian and Middle Eastern texts, megalithic sites, and artifacts from around the world. He hypothesizes that the gods of old Mesopotamia were astronauts from the planet "Nibiru", which Sitchin states the Sumerians believed to be a remote "12th planet" (counting the Sun, Moon, and Pluto as planets) associated with the god Marduk. According to Sitchin, Nibiru continues to orbit the Sun on a 3,600-year elongated orbit. Modern astronomy has found no evidence to support Sitchin's ideas. Sitchin argues that there are Sumerian texts that tell the story that 50 Anunnaki, inhabitants of a planet named Nibiru, came to Earth approximately 400,000 years ago with the intent of mining raw materials, especially gold, for transport back to Nibiru. With their small numbers they soon grew tired of the task and set out to genetically engineer laborers to work the mines. After much trial and error they eventually created Homo sapiens sapiens: the "Adapa" (model man) or Adam of later mythology. Sitchin contended the Anunnaki were active in human affairs until their culture was destroyed by global catastrophes caused by the abrupt end of the last ice age some 12,000 years ago. Seeing that humans survived and all they had built was destroyed, the Anunnaki left Earth after giving humans the opportunity and means to govern themselves. Sitchin's work has not received mainstream scholarly support and has been roundly criticized by professionals that have reviewed his hypotheses. Semitic languages scholar Michael S. Heiser says that many of Sitchin's translations of Sumerian and Mesopotamian words are not consistent with Mesopotamian cuneiform bilingual dictionaries, produced by ancient Akkadian scribes. Alan F. Alford, author of Gods of the New Millennium (1996), was an adherent of the ancient astronaut hypothesis. Much of his work draws on Sitchin's hypotheses. However, he now finds fault with Sitchin's hypothesis after deeper analysis, stating that: "I am now firmly of the opinion that these gods personified the falling sky; in other words, the descent of the gods was a poetic rendition of the cataclysm myth which stood at the heart of ancient Near Eastern religions." Robert K. G. Temple's 1976 book, The Sirius Mystery, argues that the Dogon people of northwestern Mali preserved an account of extraterrestrial visitation from around 5,000 years ago. He quotes various lines of evidence, including advanced astronomical knowledge inherited by the tribe, descriptions, and comparative belief systems with ancient civilizations such as ancient Egypt and Sumer. 
His work draws heavily on the studies of cultural anthropologists Marcel Griaule and Germaine Dieterlen. His conclusions have been criticized by scientists, who point out discrepancies within Temple's account, and suggested that the Dogon may have received some of their astronomical information recently, probably from European sources, and may have misrepresented Dogon ethnography. Various new religious movements including some branches of Theosophy, Scientology, Raëlism, Aetherius Society, and Heaven's Gate believe in ancient and present-day contact with extraterrestrial intelligence. Many of these faiths see both ancient scriptures and recent revelations as connected with the action of aliens from other planetary systems. Psychologists have found that UFO religions have similarities which suggest that members of these groups consciously or subliminally associate enchantment with the memes of science fiction. Claims of proponents Among scientists, the consensus is that the ancient astronaut hypothesis is not impossible, but unjustified and unnecessary. The "mysteries" cited as evidence for the hypothesis can be explained without having to invoke ancient astronauts; proponents look for mysteries where none exist. Since ancient astronauts are unnecessary, Occam's razor should be applied and the hypothesis rejected according to the scientific consensus. Proponents cite ancient mythologies to support their viewpoints based on the idea that ancient creation myths of gods who descend from the heavens to Earth to create or instruct humanity are representations of alien visitors, whose superior technology accounts for their perception as gods. Proponents draw an analogy to occurrences in modern time when isolated cultures are exposed to advanced technology, such as when, in the early 20th century, "cargo cults" were discovered in the South Pacific: cultures who believed various Western ships and their cargo to be sent from the gods as fulfillment of prophecies concerning their return.[user-generated source?] The ancient Sumerian myth of Enûma Eliš, inscribed on cuneiform tablets and part of the Library of Ashurbanipal, says humankind was created to serve gods called the "Annunaki". Hypothesis proponents believe that the Annunaki were aliens who came to Earth to mine gold for their own uses. According to the hypothesis proponents, the Annunaki realized mining gold was taking a toll on their race, and then created the human race as slaves. The Book of Genesis, Chapter 6 verses 1–2 and 4, states: When human beings began to increase in number on the earth and daughters were born to them, the sons of God saw that the daughters of humans were beautiful, and they married any of them they chose... The Nephilim were on the earth in those days—and also afterward—when the sons of God went to the daughters of humans and had children by them. — Genesis 6:1–4 (New International Version) Many Christians consider these groups to be the different families of Adam and Eve's children. Another interpretation is that the Nephilim are the children of the "sons of God" and "daughters of humans", although scholars are uncertain. The King James Version translates "Nephilim" as "giants" (or Gibborim). 
Ancient Astronaut proponents argue that Adam and Eve ate of the forbidden fruit in order "to be godlike", and this was the first step in human evolution.[citation needed] The first part of the apocryphal Book of Enoch expands and interprets Genesis 6:1: that the "sons of God" were a group of 200 "angels" called "Watchers", who descended to Earth to breed with humans. Their offspring are the Nephilim, "giants" who "consumed all the acquisitions of men". When humans could no longer sustain the Nephilim, they turned against humanity. The Watchers also instructed humans in metallurgy and metalworking, cosmetics, sorcery, astrology, astronomy, and meteorology. God then ordered the Watchers to be imprisoned in the ground, and created the Great Flood (or the numerous Deluge myths) to rid Earth of the Nephilim and of the humans given knowledge by the Watchers. To ensure humanity's survival, Noah is forewarned of the oncoming destruction. Because they disobeyed God, the book describes the Watchers as "fallen angels".[original research?] Some ancient astronaut proponents argue that this story is a historical account of extraterrestrials visiting Earth, called Watchers because their mission was to observe humanity. Some of the extraterrestrials disobeyed orders; they made contact with humans, cross-bred with human females, and shared knowledge with them. The Nephilim were thus half-human-half-extraterrestrial hybrids.[better source needed] Chuck Missler and Mark Eastman argue that modern UFOs carry the fallen angels, or offspring of fallen angels, and that "Noah's genealogy was not tarnished by the intrusion of fallen angels. It seems that this adulteration of the human gene pool was a major problem on the planet earth". Von Däniken also suggests that the two angels who visited Lot in Genesis 19 were ancient astronauts, who used atomic weapons to destroy the city of Sodom. Marc Dem reinterprets the Book of Genesis by writing that humanity started on another planet and that the God of the Bible is an extraterrestrial. Chapter 1 of the Book of Ezekiel recounts a vision in which Ezekiel sees "an immense cloud" that contains fire and emits lightning and "brilliant light". Within the cloud, the passage describes cherubim and ophanim: ...and in the fire was what looked like four living creatures. In appearance their form was human, but each of them had four faces and four wings. Their legs were straight; their feet were like those of a calf and gleamed like burnished bronze. Under their wings on their four sides they had human hands. All four of them had faces and wings, and the wings of one touched the wings of another. Each one went straight ahead; they did not turn as they moved... As I looked at the living creatures, I saw a wheel on the ground beside each creature with its four faces. This was the appearance and structure of the wheels: They sparkled like topaz, and all four looked alike. Each appeared to be made like a wheel intersecting a wheel. As they moved, they would go in any one of the four directions the creatures faced; the wheels did not change direction as the creatures went. Their rims were high and awesome, and all four rims were full of eyes all around. When the living creatures moved, the wheels beside them moved; and when the living creatures rose from the ground, the wheels also rose. 
— Ezekiel 1:5–9, 15–19 (New International Version) In Chapter 4 of Chariots of the Gods?, entitled "Was God an Astronaut?", von Däniken suggests that Ezekiel had seen a spaceship or spaceships; this hypothesis had been put forward by Morris Jessup in 1956 and by Arthur W. Orton in 1961. A detailed version of this hypothesis was described by Josef F. Blumrich in his book The Spaceships of Ezekiel (1974). The characteristics of the Ark of the Covenant and the Urim and Thummim have been said to suggest high technology, perhaps from alien origins. Robert Dione and Paul Misraki published books in the 1960s describing the events in the Bible as caused by alien technology. Barry Downing, a Presbyterian minister, wrote a book in 1968 arguing that Jesus was an extraterrestrial, citing John 8:23 and other biblical verses as evidence. Some ancient astronaut proponents such as Von Däniken and Barry Downing believe that the concept of hell in the Bible could be a real description of the planet Venus brought to Earth by extraterrestrials showing photos of the hot surface on Venus to humans.[citation needed] Proponents of the hypothesis state that 'God' and 'Satan' were aliens that disagreed on whether or not human beings should be allowed the information that is offered by the tree of knowledge. David Childress, a leading proponent of ancient astronaut creation hypothesis, compares this story to the Greek tale of Prometheus, who gave mankind the knowledge of fire. Ancient Astronaut proponents believe the biblical concept of Satan is based on a misunderstood visit by extraterrestrials. Erich von Däniken posited that the descendants of extraterrestrials had children with hominids, and this was referred to in the Bible as the "Original sin." Von Däniken believes that the biblical great flood was punishment after an extraterrestrial 'God' discovered that earthbound, fallen angels were mating with ape-like early humans. Childress and others have written that the passage in the Book of Invasions describing the arrival of the Tuatha Dé Danann in Ireland, records "the arrival of aliens in spacecraft with cloaking devices" at Slieve Anierin. The text states "so that they were the Tuatha De Danand who came to Ireland. In this wise they came, in dark clouds. They landed on the mountains of Conmaicne Rein in Connacht and they brought a darkness over the sun for three days and three nights". Ancient astronaut proponents believe Hopi cave drawings of Kachinas (spirit beings) found in the desert link the origins of the Hopi and Zuni tribes with "star people". They point to similar etchings elsewhere as evidence that extraterrestrials visited many different ancient civilizations.[citation needed] Other artistic support for the ancient astronaut hypothesis has been sought in Palaeolithic cave paintings. Wondjina in Australia and in the Rock Drawings in Valcamonica, in Italy (seen above) are said to bear a resemblance to present day astronauts. Supporters of the ancient astronaut hypothesis sometimes argue that similarities such as dome shaped heads, interpreted as beings wearing space helmets, prove that early man was visited by an extraterrestrial race. 
More support of this hypothesis draws upon what are said to be representations of flying saucers and other unidentified flying objects in Medieval and Renaissance art.[user-generated source] Some examples of these said objects include an ovoid shape in the sky of the painting Madonna con Bambino e San Giovannino (Madonna and Child with the Infant Saint John), an unidentified flying object in the Annunciazione (Annunciation) (1486) by Carlo Crivelli, a "spherical object with antennae" that appears similar to Sputnik in Bonaventura Salimbeni's Santissima Trinita (Holy Trinity) (1595), and many such unidentified flying objects in Masolino Da Panicale's Miracolo della neve (Miracle of the Snow) (1428). According to Italian art expert Diego Cuoghi, these objects contain religious symbolism behind them as most paintings of the time were of religious subjects. In such artworks, he says that angels and "radiant clouds" often appear in the sky. He says the object in the Madonna and Child is one of these radiant clouds, the object in the Annunciazione is a vortex of angels, the Sputnik-like object of Santissima Trinita is a globe representing creation with two sceptres held by God and Christ, and the Miracolo della neve contains many lenticular clouds. The ancient Nazca Lines are hundreds of huge ground drawings etched into the high desert of southern Peru. Some are stylized animals and humanoid figures, while others are merely straight lines hundreds of meters long. As the figures were made to be seen from a great height,[citation needed] they have been linked with the ancient astronaut hypothesis. In the 1970s, the pseudohistorical writer Erich von Däniken popularized a notion that the Nazca lines and figures could have been made "according to instructions from aircraft" and that the longer and wider lines might be runways for spacecraft. According to archaeologist Kenneth Feder, Von Däniken's extraterrestrial interpretation is not supported by any evidence. Feder wrote that "the lines are interpreted by archaeologists as ceremonial pathways of the ancient Nazca people; they were used precisely in this way in the fairly recent past." Joe Nickell of the University of Kentucky re-created one of the figures using only wooden stakes and string. Proponents of the ancient astronauts idea say some artifacts discovered in Egypt (the Saqqara Bird) and Colombia-Ecuador (Quimbaya artifacts) are similar to modern planes and gliders. These artifacts have been interpreted by mainstream archaeologists, however, as stylized representations of birds and insects.[citation needed] Proposed evidence for ancient astronauts includes the existence of ancient monuments and megalithic ruins such as the Giza pyramids of Egypt, Machu Picchu in Peru, or Baalbek in Lebanon, the Moai of Easter Island and Stonehenge of England. Supporters say that these stone structures could not have been built with the technical abilities and tools of the people of the time and further argue that many could not be duplicated even today. They suggest that the large size of the building stones, the precision with which they were laid, and the distances many were transported leaves the question open as to who constructed these sites.[citation needed] These ideas are categorically rejected by mainstream archeology. Some mainstream archeologists have participated in experiments to move large megaliths. 
These experiments have succeeded in moving megaliths up to at least 40 tons, and some have speculated that with a larger workforce, larger megaliths could be towed with the use of known ancient technology. Von Däniken states that ancient Egypt, with its great structures of the Giza pyramid complex such as the Great Pyramid of Giza and the Great Sphinx of Giza, became a "fantastic, ready-made civilization" suddenly and without transitions and development. Ancient astronaut proponents suggest that sites like the pyramids of Giza were instead constructed by extraterrestrials. However, archaeological evidence demonstrates not only the long cultural trajectory of prehistoric Egypt but also the developmental processes the ancient Egyptians underwent. Egyptian tombs began with important leaders of villages being buried in the bedrock and covered with mounds of earth. Eventually, the first pharaohs had tombs covered with single-story, mud-brick, square structures called mastabas. The stepped pyramid developed out of multiple mastabas being stacked on top of one another in a single structure. This led to the construction of pharaoh Djoser's Step Pyramid at Saqqara, which is known from records to have been built by the ancient Egyptian architect and advisor Imhotep. It was under pharaoh Sneferu that pyramid design transitioned from the stepped form to true pyramids like the well-known pyramids of Giza. A papyrus document resembling a logbook, kept by an official called inspector Merer, has also been discovered with records of the construction of the Great Pyramid. The Moai statues of Easter Island were moved miles from the Rano Raraku quarry to their current locations, and archaeologists have wondered how massive statues such as these could have been transported. The folklore of the native Rapa Nui people says that chiefs and priests used mana to make the statues of the island walk. In 1982, Czech engineer Pavel Pavel and a group of sixteen people used a replica concrete moai to test a method that could have transported the statues. They tied ropes to it and in two groups pulled and twisted it back and forth, making it move forward in a walking motion. They called it the "refrigerator method" and demonstrated that the massive statues could be easily moved by a small group of people. A number of ancient cultures, such as the ancient Egyptians and some Native Americans, artificially lengthened the skulls of their children. Some ancient astronaut proponents propose that this was done to emulate extraterrestrial visitors, whom they saw as gods. Among the ancient rulers depicted with elongated skulls are pharaoh Akhenaten and his wife Nefertiti. The depiction of Akhenaten and his family with traits like elongated skulls and limbs, underdeveloped torsos, and gynecomastia in Amarna art is hypothesized to be the effect of a familial disease. Marriage between family members, especially siblings, was common in ancient Egyptian royal families, elevating the risk of such disorders. Studies on the remains of the ruling family of 18th Dynasty Egypt have found evidence of deformities and illnesses. Proposed syndromes of Akhenaten include Loeys-Dietz syndrome, Marfan's syndrome, Frohlich syndrome, and Antley-Bixler syndrome. Akhenaten worshipped the sun disk god Aten, and it is suggested that such worship could point to a disease that is alleviated by sunlight. In popular culture Ancient astronauts have been addressed frequently in science fiction and horror fiction in many different media.
In a 2004 article in Skeptic magazine, Jason Colavito writes that von Däniken borrowed many of the book's concepts from Le Matin des magiciens (Morning of the Magicians), that this book in turn was heavily influenced by the Cthulhu Mythos, and that the core of the ancient astronaut hypothesis originates in H. P. Lovecraft's works "The Call of Cthulhu" and At the Mountains of Madness. Colavito later expanded on this idea in his book The Cult of Alien Gods: H. P. Lovecraft and Extraterrestrial Pop Culture. The idea that aliens visited Earth in the past is frequently seen in works of fiction. For example, the comic book Thor posits that all of Norse mythology is based on actual beings living in other dimensions, who were worshipped as gods by the Vikings and who reappear on Earth in modern times. Von Däniken's work, however, inspired several works and franchises over time, such as Eternals, Stargate, Indiana Jones and the Kingdom of the Crystal Skull, Prometheus and The X-Files. None of those works take the idea seriously; they merely use it as a narrative device. Another angle may be to leave the aliens out of the story and focus instead on devices they left behind, as in the novels Scarlet Dream, Galactic Derelict, World of Ptavvs, Toolmaker Koan, and A Fire Upon the Deep. Aliens may also appear as an elder race that created or shepherded humans in their early times, and may or may not be present in the work's present day. Ancient Aliens is a television series that features proponents of the ancient astronaut hypothesis, such as Giorgio A. Tsoukalos, David Childress, Erich von Däniken, Steven M. Greer, and Nick Pope.[failed verification] Proponents Many publications have argued for the ancient astronaut hypothesis. The following are notable examples: |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-61] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases, Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners. In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local pottery imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, Emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, who was born there between 256 and 285 CE and served as a soldier in the guard of the emperor Diocletian. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk Empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax-rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics From the 19th century until the Lydda Death March of 1948, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, were classified as "Arab". Education According to CBS, there are 38 schools and 13,188 pupils in the city: 26 elementary schools with 8,325 pupils and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard"; "Cafe-Co", a subsidiary of the Strauss Group; and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/17/running-ai-models-is-turning-into-a-memory-game/] | [TOKENS: 975] |
Running AI models is turning into a memory game When we talk about the cost of AI infrastructure, the focus is usually on Nvidia and GPUs — but memory is an increasingly important part of the picture. As hyperscalers prepare to build out billions of dollars’ worth of new data centers, the price for DRAM chips has jumped roughly 7x in the last year. At the same time, there’s a growing discipline in orchestrating all that memory to make sure the right data gets to the right agent at the right time. The companies that master it will be able to make the same queries with fewer tokens, which can be the difference between folding and staying in business. Semiconductor analyst Doug O’Laughlin has an interesting look at the importance of memory chips on his Substack, where he talks with Val Bercovici, chief AI officer at Weka. They’re both semiconductor guys, so the focus is more on the chips than the broader architecture; the implications for AI software are pretty significant too. I was particularly struck by this passage, in which Bercovici looks at the growing complexity of Anthropic’s prompt-caching documentation: The tell is if we go to Anthropic’s prompt caching pricing page. It started off as a very simple page six or seven months ago, especially as Claude Code was launching — just “use caching, it’s cheaper.” Now it’s an encyclopedia of advice on exactly how many cache writes to pre-buy. You’ve got 5-minute tiers, which are very common across the industry, or 1-hour tiers — and nothing above. That’s a really important tell. Then of course you’ve got all sorts of arbitrage opportunities around the pricing for cache reads based on how many cache writes you’ve pre-purchased. The question here is how long Claude holds your prompt in cached memory: You can pay for a 5-minute window, or pay more for an hour-long window. It’s much cheaper to draw on data that’s still in the cache, so if you manage it right, you can save an awful lot. There is a catch though: Every new bit of data you add to the query may bump something else out of the cache window. This is complex stuff, but the upshot is simple enough: Managing memory in AI models is going to be a huge part of AI going forward. Companies that do it well are going to rise to the top. And there is plenty of progress to be made in this new field. Back in October, I covered a startup called Tensormesh that was working on one layer in the stack known as cache optimization. Opportunities exist in other parts of the stack. For instance, lower down the stack, there’s the question of how data centers are using the different types of memory they have. (The interview includes a nice discussion of when DRAM chips are used instead of HBM, although it’s pretty deep in the hardware weeds.) Higher up the stack, end users are figuring out how to structure their model swarms to take advantage of the shared cache.
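The trade-off Bercovici describes, paying for a cache window so that repeated context is read back cheaply rather than reprocessed, is easier to see in code. Below is a minimal sketch assuming Anthropic's Python SDK and its cache_control message field; the model name and file path are placeholders, and the exact usage field names may vary between SDK versions.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# A large, stable prefix (system prompt, reference docs, tool definitions) is the
# part worth caching; the short user question at the end changes on every call.
big_reference_doc = open("reference.md").read()  # placeholder path

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name; check current docs
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": big_reference_doc,
            # Mark the stable prefix as cacheable. The default window is the
            # 5-minute tier mentioned above; a 1-hour tier is the other option.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarise section 3 of the reference."}],
)

# The usage metadata separates cache writes from cache reads, which is exactly
# the split the pricing tiers are built around.
print(response.usage)
```

Whether this saves money depends on hit rate: a cache write costs more than a plain input token, so the prefix has to be reused within the window, and, as noted above, each new piece of context added to a request can push something else out of that window.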
As companies get better at memory orchestration, they’ll use fewer tokens and inference will get cheaper. Meanwhile, models are getting more efficient at processing each token, pushing the cost down still further. As server costs drop, a lot of applications that don’t seem viable now will start to edge into profitability. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_ref-Organized_Family_Crime_10-0] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American artificial intelligence (AI), social media and technology company that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024[update], Musk was diverting to X and xAI a large number of Nvidia chips that had been ordered by Tesla, Inc. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of social media platform X (formerly known as Twitter), which was previously acquired by Musk in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that they had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans. Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government" and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced xAI was planning on building Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. The Southern Environmental Law Center has stated the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models underlies the expansion. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organised xAI into four primary development teams, one for the Grok app and others for its other features such as Grok Imagine; Grokipedia, X, and API features fall under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
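For context on what the API release means in practice, here is a hypothetical request sketch. It assumes the service is OpenAI-compatible and reachable through the standard openai Python client; the base URL and model identifier below are assumptions for illustration, not confirmed values, and should be taken from xAI's own documentation.

```python
from openai import OpenAI

# Assumed endpoint and placeholder credentials/model; consult xAI's docs for
# the real values before using this.
client = OpenAI(
    api_key="YOUR_XAI_API_KEY",      # placeholder
    base_url="https://api.x.ai/v1",  # assumed xAI base URL
)

response = client.chat.completions.create(
    model="grok-beta",  # placeholder model identifier
    messages=[{"role": "user", "content": "In one sentence, what does xAI build?"}],
)
print(response.choices[0].message.content)
```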
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-59] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted the research but decided to build on what it had developed with Nintendo and Sega to create its own console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation was also doing well in markets where it was never officially released. For example, in Brazil, the console could not be released because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market, the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.), with the missing letters rendered as the controller's button symbols, and "U R NOT E" (with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, a milestone the PlayStation 2 later reached even faster. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 polygons per second, or up to 360,000 polygons per second when they are flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw further ports removed, with the final revisions retaining only a serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers.
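As noted above, the console has no dedicated 2D blitter: sprites and other 2D elements are fed through the same polygon pipeline as 3D geometry. The fragment below is a purely conceptual Python sketch, not PlayStation code; the structure and function names are our own, and it only illustrates the idea of a screen-aligned sprite being decomposed into two textured triangles of the kind a geometry engine would pass to a rasterising GPU.

# Conceptual sketch only (hypothetical structures, not the console's actual API):
# a console without a dedicated 2D chip can still draw a sprite by expressing
# the sprite's rectangle as two screen-space triangles and handing them to the
# polygon rasteriser, which maps the sprite image onto them as a texture.

from dataclasses import dataclass

@dataclass
class Vertex:
    x: int  # screen-space X in pixels
    y: int  # screen-space Y in pixels
    u: int  # texture column sampled at this corner
    v: int  # texture row sampled at this corner

def sprite_to_triangles(x: int, y: int, w: int, h: int):
    """Express an axis-aligned w-by-h sprite at (x, y) as two triangles."""
    top_left     = Vertex(x,     y,     0,     0)
    top_right    = Vertex(x + w, y,     w - 1, 0)
    bottom_left  = Vertex(x,     y + h, 0,     h - 1)
    bottom_right = Vertex(x + w, y + h, w - 1, h - 1)
    # The two triangles share the rectangle's diagonal.
    return [(top_left, top_right, bottom_left),
            (top_right, bottom_right, bottom_left)]

if __name__ == "__main__":
    for triangle in sprite_to_triangles(100, 64, 32, 32):
        print(triangle)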
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack"; it also included a car cigarette-lighter adaptor, adding a degree of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons marked with simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than labelling its buttons with the letters or numbers traditionally used, the PlayStation controller established a set of symbols that became a trademark incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no" respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the larger average hand size in those regions. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also carries a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were sufficient. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, the DualShock has analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite the device having been promoted in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of the PlayStation BIOS on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, because the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the greatest and most influential video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units. Following the console's 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (known in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling those of Sega and Nintendo.
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each of the five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities and Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that the PlayStation was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most ever produced for a console. Its success was a significant financial boon for Sony, with profits from its video game division coming to account for roughly 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, all continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, and crediting its role in transitioning the industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because the proprietary cartridge format helped enforce copy protection, given Nintendo's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://github.com/resources/whitepapers] | [TOKENS: 489] |
Ebooks & Whitepapers Browse our collection of Ebooks and Whitepapers for valuable industry knowledge, trends, and strategies to help you stay ahead and make informed decisions. Discover the Gartner® roadmap for achieving 25% to 30% productivity gains by applying AI across the entire software development lifecycle. Learn why Gartner positioned GitHub as a Leader for the second year in a row—highest and furthest in both Ability to Execute and Completeness of Vision. This whitepaper provides a clear roadmap for navigating this new landscape, showing how GitHub’s AI-powered platform can empower your teams and strengthen governance. Read the full Forrester TEI study and use the interactive ROI calculator to model results for your organization. The Forrester Industry Spotlight on GitHub Advanced Security shows how enterprises achieve measurable gains in security efficiency, risk reduction, and developer productivity. Discover what AI agents can really do for your organization — and how they’re already reshaping the way software gets built. GitHub was named a Leader in the IDC MarketScape for AI Coding and Software Engineering Technologies. Forrester has named Microsoft, recognizing both GitHub and Azure DevOps, a Leader in The Forrester Wave™: DevOps Platforms, Q2 2025 report. What GitHub data reveals as the top priorities for modern engineering teams. Developers work alongside Copilot to write code, generate tests, fix bugs, create documentation, and much more. To fully realize Copilot’s potential, entire teams, not just individual developers, must adopt new skills. While Copilot may be a tool like any other, generative AI presents unique adoption challenges that require specific solutions. Boost the productivity and time to value of your DevOps practice with the integrated AI and security capabilities of the GitHub platform with Azure integration. DevOps is a transformative practice—and not only because it helps to build better software. It also aligns teams, from IT to engineering to security, removing siloed workstreams and promoting collaboration. As great as this sounds, pulling together your DevOps processes and tools requires some practice to make your strategy perfect.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Noachis_Terra] | [TOKENS: 135] |
Noachis Terra Noachis Terra (/ˈnoʊəkɪs/; lit. "Land of Noah") is an extensive southern landmass (terra) of the planet Mars. It lies west of the giant Hellas impact basin, roughly between the latitudes −20° and −80° and longitudes 30° west and 30° east, centered on 45°S 350°E. It is in the Noachis quadrangle. The term "Noachian epoch" is derived from this region.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson_classification_system#Anecdotes_and_jokes] | [TOKENS: 2451] |
Aarne–Thompson–Uther Index The Aarne–Thompson–Uther Index (ATU Index) is a catalogue of folktale types used in folklore studies. The ATU index is the product of a series of revisions and expansions by an international group of scholars: originally published in German by Finnish folklorist Antti Aarne (1910), the index was translated into English, revised, and expanded by American folklorist Stith Thompson (1928, 1961), and later further revised and expanded by German folklorist Hans-Jörg Uther (2004). The ATU index is an essential tool for folklorists, used along with the Thompson (1932) Motif-Index of Folk-Literature. Background Austrian consul Johann Georg von Hahn devised a preliminary analysis of some 40 tale "formulae" as an introduction to his book of Greek and Albanian folktales, published in 1864. Reverend Sabine Baring-Gould, in 1866, translated von Hahn's list and extended it to 52 tale types, which he called "story radicals". Folklorist J. Jacobs expanded the list to 70 tale types and published it as "Appendix C" in Burne & Gomme's Handbook of Folk-Lore. Before the publication of Antti Aarne's first folktale classification, Astrid Lunding translated Svend Grundtvig's system of folktale classification. This catalogue consisted of 134 types, mostly based on Danish folktale compilations in comparison to international collections available at the time by other folklorists, such as the Brothers Grimm's and Emmanuel Cosquin's. Antti Aarne was a student of Julius Krohn and his son Kaarle Krohn. Aarne developed the historic-geographic method of comparative folkloristics, and developed the initial version of what became the Aarne–Thompson tale type index for classifying folktales, first published in 1910 as Verzeichnis der Märchentypen ("List of Fairy Tale Types"). The system was based on identifying motifs and the repeated narrative ideas that can be seen as the building-blocks of traditional narrative; its scope was European. The American folklorist Stith Thompson revised Aarne's classification system in 1928, enlarging its scope while also translating it from German into English. In doing so, he created the "AT number system" (also referred to as the "AaTh system"), which remained in use through the second half of the century. Another edition with further revisions by Thompson followed in 1961. According to American folklorist D. L. Ashliman, "The Aarne–Thompson system catalogues some 2500 basic plots from which, for countless generations, European and Near Eastern storytellers have built their tales." The AT-number system was updated and expanded in 2004 with the publication of The Types of International Folktales: A Classification and Bibliography by German folklorist H.-J. Uther. Uther noted that many of the earlier descriptions were cursory and often imprecise, that many "irregular types" are in fact old and widespread, and that "emphasis on oral tradition" often obscured "older, written versions of the tale types". To remedy these shortcomings Uther developed the Aarne–Thompson–Uther (ATU) classification system and included more tales from eastern and southern Europe as well as "smaller narrative forms" in this expanded listing. He also put the emphasis of the collection more explicitly on international folktales, removing examples whose attestation was limited to one ethnic group. Index In The Folktale, Thompson defines a tale type as follows: A type is a traditional tale that has an independent existence.
It may be told as a complete narrative and does not depend for its meaning on any other tale. It may indeed happen to be told with another tale, but the fact that it may be told alone attests its independence. It may consist of only one motif or of many. — Thompson (1977), p. 415 The Aarne–Thompson Tale Type Index divides tales into sections with an AT number for each entry. The names given are typical, but usage varies; the same tale type number may be referred to by its central motif or by one of the variant folktales of that type, which can also vary, especially when used in different countries and cultures. The name does not have to be strictly literal for every folktale. For example, The Cat as Helper (545B) also includes tales where a fox helps the hero. Closely related folktales are often grouped within a type. For example, tale types 400–424 all feature brides or wives as the primary protagonist, for instance The Quest for a Lost Bride (400) or the Animal Bride (402). Subtypes within a tale type are designated by the addition of a letter to the AT number, for instance: tale 510, Persecuted Heroine (renamed in Uther's revision as Cinderella and Peau d'Âne ["Cinderella and Donkey Skin"]), has subtypes 510A, Cinderella, and 510B, Catskin (renamed in Uther's revision as Peau d'Asne [also "Donkey Skin"]). As an example, the entry for 510A in the ATU index (with cross-references to motifs in Thompson's Motif-Index of Folk Literature in square brackets, and variants in parentheses) reads: 510A Cinderella. (Cenerentola, Cendrillon, Aschenputtel.) A young woman is mistreated by her stepmother and stepsisters [S31, L55] and has to live in the ashes as a servant. When the sisters and the stepmother go to a ball (church), they give Cinderella an impossible task (e.g. sorting peas from ashes), which she accomplishes with the help of birds [B450]. She obtains beautiful clothing from a supernatural being [D1050.1, N815] or a tree that grows on the grave of her deceased mother [D815.1, D842.1, E323.2] and goes unknown to the ball. A prince falls in love with her [N711.6, N711.4], but she has to leave the ball early [C761.3]. The same thing happens on the next evening, but on the third evening, she loses one of her shoes [R221, F823.2]. The prince will marry only the woman whom the shoe fits [H36.1]. The stepsisters cut pieces off their feet in order to make them fit into the shoe [K1911.3.3.1], but a bird calls attention to this deceit. Cinderella, who had first been hidden from the prince, tries on the shoe and it fits her. The prince marries her. Combinations: This type is usually combined with episodes of one or more other types, esp. 327A, 403, 480, 510B, and also 408, 409, 431, 450, 511, 511A, 707, and 923. Remarks: Documented by Basile, Pentamerone (I,6) in the 17th century. The entry concludes, like others in the catalogue, with a long list of references to secondary literature on the tale, and variants of it.(pp284–286) Critical response In his 1997 essay "The motif-index and the tale type index: A critique", American folklorist Alan Dundes explains that the Aarne–Thompson indexes are some of the "most valuable tools in the professional folklorist's arsenal of aids for analysis". They have, however, been subject to criticism concerning their construction, where they apply, and what they exclude. The tale type index was criticized by V. Propp of the Russian Formalist school of the 1920s for ignoring the functions of the motifs by which they are classified. 
Furthermore, Propp contended that using a "macro-level" analysis means that stories sharing motifs might not be classified together, while stories with wide divergences may be grouped under one tale type because the index must select some features as salient.[a] He also observed that although the distinction between animal tales and tales of the fantastic was basically correct – no one would classify "Tsarevitch Ivan, the Fire Bird and the Gray Wolf" as an animal tale just because of the wolf – it did raise questions, because animal tales often contained fantastic elements, and tales of the fantastic often contained animals; indeed, a tale could shift categories if a peasant deceived a bear rather than a devil. In 2009, describing the motivation for his work, Uther presented several criticisms of the original index. He pointed out that Thompson's focus on oral tradition sometimes neglects older versions of stories, even when written records exist, and that some included folktale types have dubious importance. With regard to the typological classification, some folklorists and tale comparativists have acknowledged singular tale types that, due to their own characteristics, would merit their own type.[b] Although such tales often have not been listed in the international folktale system, they can exist in regional or national classification systems. In his 2009 critique, Uther also finds that the distribution of stories is uneven, with Eastern and Southern European folktale types, as well as those of many other regions, being under-represented. Similarly, Thompson had noted that the tale type index might well be called The Types of the Folk-Tales of Europe, West Asia, and the Lands Settled by these Peoples. However, Dundes notes that in spite of the flaws of tale type indexes (including typos, redundancies, and censorship) (p. 198), "they represent the keystones for the comparative method in folkloristics, a method which despite postmodern naysayers ... continues to be the hallmark of international folkloristics" (p. 200). The ATU folktype index has been criticized for its apparent geographic concentration on Europe and North Africa, showing over-representation of Eurasia[c] and North America. The catalogue appears to ignore or under-represent other regions. Central Asian examples include Yuri Berezkin's The captive Khan and the clever daughter-in-law (and variants), The travelling girl and her helpful siblings, and Woman's magical horse, as named by researcher Veronica Muskheli of the University of Washington. Author Pete Jordi Wood claims that topics related to homosexuality have been excluded intentionally from the type index. Similarly, folklorist Joseph P. Goodwin states that Thompson omitted "much of the extensive body of sexual and 'obscene' material", and that – as of 1995 – "topics like homosexuality are still largely excluded from the type and motif indexes." In a 2002 essay, Alan Dundes also criticized Thompson's handling of the folkloric subject material, which he considered to be "excessive prudery" and a form of censorship. Distribution by origin A quantitative study published by folklorist S. Graça da Silva and anthropologist J. J. Tehrani in 2016 tried to evaluate the time of emergence for the "Tales of Magic" (ATU 300–ATU 749), based on a phylogenetic model.
They found four of them to belong to the Proto-Indo-European stratum of magic tales.[d] Ten more magic tales were found to be current throughout the Western branch of the Indo-European languages, comprising the main European language families derived from PIE (i.e. Balto-Slavic, Germanic, Italic and Celtic).
======================================== |
[SOURCE: https://techcrunch.com/2026/02/17/anthropic-releases-sonnet-4-6/] | [TOKENS: 542] |
Posted: Anthropic releases Sonnet 4.6 Anthropic has released a new version of its midsized Sonnet model, keeping pace with the company’s four-month update cycle. In a post announcing the new model, Anthropic emphasized improvements in coding, instruction-following, and computer use. Sonnet 4.6 will be the default model for Free and Pro plan users. The beta release of Sonnet 4.6 will include a context window of 1 million tokens, twice the size of the largest window previously available for Sonnet. Anthropic described the new context window as “enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request.” The release comes just two weeks after the launch of Opus 4.6, with an updated Haiku model likely to follow in the coming weeks. The launch comes with a new set of record benchmark scores, including OS World for computer use and SWE-Bench for software engineering. But perhaps the most impressive is its 60.4% score on ARC-AGI-2, meant to measure skills specific to human intelligence. The score puts Sonnet 4.6 above most comparable models, although it still trails models like Opus 4.6, Gemini 3 Deep Think, and one refined version of GPT 5.2.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dual_dating#China] | [TOKENS: 1739] |
Dual dating Dual dating is the practice, in historical materials, of indicating a date with what appear to be duplicate or excessive digits: these may be separated by a hyphen or a slash, or placed one above the other. The need for dual dating arose from the transition from an older calendar to a newer one. Another method used is to give the date of an event according to one calendar, followed in parentheses by the date of the same event in the other calendar, appending an indicator to each to specify which reference calendar applies. As an example, in the date "10/21 February 1750/51" – a style seen in the records of Great Britain and its possessions – the notation arises from the prospective or previous adoption of the Gregorian calendar and a concurrent calendar reform. (The dual day number is due to the eleven-day difference (at the time) between the Julian calendar date and the Gregorian one; the dual year is due to a change in the start of the year, from 25 March to 1 January.) After the Calendar (New Style) Act 1750 was passed, the notations "OS" (old style) and "NS" (new style) became dominant in historical writing about British and American events in the eighteenth century. This notation is also used when writing about Russian events in the early twentieth century. European countries and their colonies: Old Style and New Style dates Long before the British Empire adopted the Gregorian calendar, the date of the start of the year caused difficulties.[a] Until 1752, England, Wales, Ireland and the American colonies started the legal year on 25 March, whereas Scotland (since 1600), as well as common usage, started the year on 1 January.[b] This meant that a date such as 29 January, while being toward the end of a legal year, would also be near the beginning of the following "common" (and Scottish) year. It was to show this duality that the system of displaying two year numbers first came into use; examples may be seen on memorial tablets and in parish registers. Dating based on the year beginning on 25 March became known as "Annunciation Style" dating, while dates of the year commencing on 1 January were described as "Circumcision Style" dates, because this was the date of the Feast of the Circumcision, commemorating the eighth day of Jesus Christ's life after his birth, counted from its observation on Christmas Day (25 December). In 1752, England and its possessions changed the start of the year to 1 January, and also adopted the Gregorian calendar (on 2 September[c]). Thereafter, the terms "Old Style" (OS) and "New Style" (NS) were more commonly added to dates when it proved necessary or expedient to identify which calendar was being used for the given date. Often, both were given, for example 20 January 1708 (OS) / 1709 (NS). There may be some confusion as to which calendar alteration OS or NS refers to: the change of the start of the year, or the transition from one style of calendar to another. Historically, OS referred only to the change of the start of the year, from 25 March to 1 January, and some historians still believe this is the best practice. However, OS and NS may refer to both alterations of the calendar: constructions like 31 August [O.S. 20 August] 1753 may be seen. During the period between 1582, when the first countries adopted the Gregorian calendar, and 1923, when the last European country adopted it,[d] it was often necessary to indicate the date of an event in both the Julian calendar and the Gregorian calendar.
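The size of the gap between the Julian and Gregorian reckonings depends on the century, which is why the day numbers above differ by eleven days around 1750 but had grown to thirteen days by the early twentieth century. The helper below is our own minimal sketch of that arithmetic (it is not part of any standard library), and it is valid away from the end-of-February boundary of century years, where the gap changes.

# A rough sketch (our own helper, not a standard-library function) of the
# Julian-to-Gregorian day offset discussed above.  The gap grows by three
# days every four centuries.

def julian_gregorian_gap(year: int) -> int:
    """Days to add to a Julian ('Old Style') date to obtain the
    Gregorian ('New Style') date, for the given year."""
    return year // 100 - year // 400 - 2

# Examples matching the article's figures:
print(julian_gregorian_gap(1750))  # 11 -> "10/21 February 1750/51"
print(julian_gregorian_gap(1917))  # 13 -> 25 October (O.S.) = 7 November (N.S.) 1917
print(julian_gregorian_gap(1582))  # 10 -> the gap at the original 1582 reform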
Although the OS/NS notation was originally used only to clarify the date of events in the British Empire, the usage was reprised in more recent English-language histories of Russia, which retained the Julian calendar until 1918. For example, the beginning of the October Revolution may be recorded as 25 October [N.S. 7 November] 1917 (or 7 November [O.S. 25 October] 1917). East Asia Japan, Korea, and China started using the Gregorian calendar on 1 January 1873, 1896, and 1912, respectively; they had previously used lunisolar calendars. None of them used the Julian calendar, so the Old Style and New Style dates in these countries usually mean the older lunisolar dates and the newer Gregorian calendar dates respectively. The old style calendars in these countries were similar, but not all the same. Arabic numerals may be used for both calendar dates in modern Japanese and Korean, but not in Chinese. Japan started using the Gregorian calendar on 1 January 1873, locally known as "the first day of the first month of Meiji 6" (明治6年1月1日, Meiji rokunen ichigatsu tsuitachi). The preceding day, 31 December 1872, was "the second day of the twelfth month of Meiji 5" (明治5年12月2日, Meiji gonen jūnigatsu futsuka). Japan currently employs two calendar systems: the Gregorian calendar and the Japanese era name calendar. Specifically, the months and days now correspond to those of the Gregorian calendar, but the year is expressed as an offset of the era. For example, the Gregorian year 2007 corresponds to Heisei 19. An era does not necessarily begin on 1 January. For example, 7 January Shōwa 64, the day of the death of Emperor Shōwa, was followed by 8 January Heisei 1, which lasted until 31 December. Korea started using the Gregorian calendar on 1 January 1896, which was the 17th day of the 11th lunar month not only in Korea but also in China, which still used the lunisolar calendar. The lunisolar Korean calendar is now used only for very limited unofficial purposes. The Republic of China (ROC) started using the Gregorian calendar on 1 January 1912, but the lunisolar Chinese calendar is still used along with the Gregorian calendar, especially when determining certain traditional holidays. The calendar's reference has been a longitude of 120°E since 1929, which is also used for Chinese Standard Time (UTC+8). Mainland China, Hong Kong, Macau, Malaysia, Indonesia, Singapore and Taiwan all have legal holidays based on the lunisolar Chinese calendar, with the most important one being the Chinese New Year. From 1995, the standard in the People's Republic of China (PRC) was to distinguish the two styles visually by writing new style dates with Arabic numerals and old style dates with Chinese characters (never Arabic numerals). Since 1 November 2011, writing old style dates with Chinese characters, never Arabic numerals, remains the standard in the PRC, but new style dates may be written with either Arabic or Chinese numerals. In Taiwan, even though new style dates are written in Chinese characters in very formal texts, it is now common to see Arabic numerals in new style dates in less formal texts. When writing old style dates, Chinese characters are usually used, but Arabic numerals may still be seen.[e] The calendar year in Taiwan is usually expressed as the "Year of the Republic", counting Year 1 from the foundation of the Republic of China in 1912 CE, so the current Gregorian year 2026 corresponds to ROC year 115.
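The era reckonings described above amount to fixed offsets from the Gregorian year once an era has begun. The snippet below is a simple illustration rather than an official conversion routine: the offsets are derived from the article's own examples (Meiji 6 = 1873, Heisei 19 = 2007, ROC year 115 = 2026), and a real converter would also have to respect the exact dates on which eras begin and end (Heisei, for instance, only started on 8 January 1989).

# Illustrative only: Gregorian-year to era-year arithmetic implied by the
# examples above.  Offsets are chosen so that era year 1 falls on the
# documented starting year of each era.

ERA_OFFSETS = {
    "Meiji": 1867,    # Meiji 1 = 1868, so Meiji 6 = 1873
    "Heisei": 1988,   # Heisei 1 = 1989, so Heisei 19 = 2007
    "ROC": 1911,      # Year of the Republic 1 = 1912, so ROC 115 = 2026
}

def era_year(gregorian_year: int, era: str) -> int:
    """Convert a Gregorian year to a year count within the named era."""
    return gregorian_year - ERA_OFFSETS[era]

print(era_year(1873, "Meiji"))   # 6
print(era_year(2007, "Heisei"))  # 19
print(era_year(2026, "ROC"))     # 115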
Use of dates from historical documents in modern documents There was some confusion when calendars changed, and the confusion may continue today when evaluating historical sources. When 'translating' dates from secondary historical sources for current use, it is advised that, for dates in January, February and March, both year numbers be entered into modern documents until a copy of the original primary source can be checked to verify which style was used in the 'official record'. Errors were often made in the early 19th century and have been perpetuated. In either case, to avoid further confusion, modern researchers are advised to annotate all dates with a notation indicating the style of the date, and to use a slash rather than a hyphen to indicate alternate dates.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-64] | [TOKENS: 8773] |
OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits from authors and media companies alleging copyright infringement of work used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstitution of the board. Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected lagged significantly behind the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers; Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the potential to reduce processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a floor for how much the nonprofit's stake should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards currently provided by the nonprofit and enforced by the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was needed to pay for OpenAI's use of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot as Microsoft Copilot, added it to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024. Growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI has reported revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to reach $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's determination to maintain its position in the industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion and made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that Microsoft remained committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign itself. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstituted board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting observer of the company's operations; Microsoft gave up this observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave under which OpenAI would pay $11.9 billion over five years for access to AI infrastructure and acquire $350 million worth of CoreWeave shares. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. The text to be annotated, however, usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021; the four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour post-tax. Sama's spokesperson said that the $12.50 also covered other implicit costs, including infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks, but as of January 2026 the deal had not been finalized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply called "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. 
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for citation management, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's then-chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it received nothing close to that share of compute. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions containing personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, who co-led the superalignment team with Sutskever, also departed, citing concerns over safety and trust. OpenAI subsequently signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The investigation also covered allegations that the company scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI on copyright grounds. The lawsuits were said to have charted a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. 
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's work nor the sources of the data used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. 
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, the estate of Suzanne Adams sued OpenAI over her death; Adams had allegedly been murdered by her son, Stein-Erik Soelberg, then 56 years old, a paranoid, delusional man who in the months before the killing had often discussed his ideas with ChatGPT. The estate claimed that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded by saying it would make ChatGPT safer for users disconnected from reality. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kaifeng_Jews] | [TOKENS: 7525] |
Contents Kaifeng Jews Kaifeng Jews[a] are a small community of descendants of Chinese Jews in Kaifeng, Henan province, China. In the early centuries of their settlement, they may have numbered around 2,500 people. Despite their isolation from the rest of the Jewish diaspora, their ancestors managed to practice Jewish traditions and customs for several centuries. The distinctive customary life of the Kaifeng community slowly eroded, as assimilation and intermarriage with non-Jewish (Han Chinese and Hui) neighbors advanced, until, by the 19th century, its Jewishness largely became extinct, apart from its retention of memories of its clan's Jewish past as the Jews became Chinese.[b] The place of origin of these Jews and the date when they established their settlement in Kaifeng are sources of intense debate among experts. While the descendants of the Kaifeng Jews are assimilated into mainstream Kaifeng Han culture, some of them are trying to revive the beliefs and customs of their ancestors. In the 21st century, efforts have been made to revive Kaifeng's Jewish heritage and encourage the descendants of its original population to convert back to Judaism. Via the offices of the Shavei Israel organization, 19 young Chinese descendants of Kaifeng Jews have made aliyah, and in the process have had to do a formal conversion. Since 2015, descendants of the Kaifeng Jews have come under government pressure and suspicion whether or not they qualify as Jews. History The origin of the Kaifeng Jews and the date of their arrival is one of the most intensely debated topics in the field of Chinese-Jewish relations.[c] Though some scholars date their arrival to the Tang dynasty (618–907), or even earlier,[d] Steven Sharot, reflecting the majority view, considers that the most probable date for the formation of a Jewish community in Kaifeng was sometime during the Song dynasty (960–1279). That Jewish merchants were active before Song China appears probable from the fact that the Muslim Persian geographer ibn Khordadbeh in his Book of Roads and Kingdoms ca. 870 describes the Jewish Radhanite merchants as operating over a wide arc from Western Europe to China.[e] It has been conjectured that this group constituted the first of two waves of Jewish settlement in China, the second being associated with the Mongol conquest of China and the establishment of the Yuan dynasty.[f] According to a scholarly consensus, the Jewish community of Kaifeng primarily consisted of people of Persian Jewish origin. Uncertainty persists as to whether they came overland through Chang'an, via either of the two Silk Roads,[g] or whether they travelled inland after they reached coastal cities like Guangzhou or Quanzhou by sea. Ibn Khordadbeh's Rhadanites used both routes. Some evidence has been interpreted to suggest that their ancestors may have mostly hailed from the Bukharan Jewish branch of Persian Jews who had settled in Central Asia. 
In all likelihood, all of the founders of the community were male Jewish merchants: the arduous, dangerous nature of the route, and the length of time which they needed to spend in order to travel on it, would have probably forced them to rule out bringing their wives, and after they settled in Kaifeng, they married Chinese women.[h] Among the vast trove of documents which Aurel Stein discovered in Dunhuang in Gansu in northwestern China was a bill of sale for sheep dating back to 718, written in New Persian using the Hebrew alphabet,[i] together with a fragment of a Seliḥoth that was probably composed in the eighth or ninth century; these documents were written on paper, something which was then unavailable in the West. A century later, the Arab geographer Abū Zayd Ḥasan al-Sīrāfī mentioned in 910 the Guangzhou massacre of 878/9, in which Muslims, Oriental Christians, and Jews were killed, attesting to the latter group's presence in China. Trade with China was predominantly maritime and dominated by Arabs, with many Jews also engaged in this network. By the 11th century, more than a million Arabs lived in port enclaves, where they were allowed self-administration. At least seven synagogue communities are attested for this period in all major Chinese port cities, such as Ningbo City, Guangzhou and Hangzhou. Goods from these coastal centres were transported inland via the Grand Canal to the Yellow River and then by barge to Kaifeng. The Jewish community that was eventually established in Kaifeng survived the collapse of these sister communities on the eastern seaboard, all of which disappeared in the 15–16th centuries when the Ming dynasty's ability to protect its coast was crippled by raids by Wokou. The point of departure for determining precisely when a community (kehillah) was established relies on two forms of evidence: the information surviving in inscriptions from four stelae recovered from Kaifeng, and references in Chinese dynastic sources. The stelae are dated 1489, 1512, 1663 and 1679. Chinese documents on Jews are rare compared to the voluminous records of other peoples.[j] The first official documents referring to the Jews as a distinct group date to the 12th century. Two Chinese scholars have argued that the Jews went to China in 998, because the Song History records that in the year 998, a monk (僧) named Ni-wei-ni (你尾尼; Nǐ wěi ní) and others had spent seven years traveling from India to China in order to pay homage to the Song Emperor Zhenzong. They identified this Ni-wei-ni as a Jewish rabbi. Others followed this up with a claim that the Song History provides a precise date for a large population of Jewish expatriates accompanying Ni-wei-ni from India who putatively arrived in Kaifeng on 20 February 998. These inferences contradict Buddhist records for Ni-wei-ni's visit.[k] Both the sēng (僧) used to describe Ni-wei-ni in the Song dynastic history and the shāmén (沙門) in the Buddha Almanac of Zhi-pan mean "Buddhist monk", not rabbi. Furthermore, Ni-wei-ni did not bring Western cloth with him, but banyan seeds. The earliest stele erected by the Kaifeng community bears the date 1489. This, it is affirmed, commemorates the construction in 1163 of a synagogue called Qingzhensi (清真寺; qīngzhēnsì; 'True and pure Temple'), the customary term for mosques in China.[l] The inscription states that the Jews came to China from Tiānzhú (天竺),[m] a Han-Song term for India. 
It adds that they brought Western cloth as tribute for an emperor, unnamed, who welcomed them with the words: "You have come to Our China; reverence and preserve the customs of your ancestors, and hand them down at Bianliang (汴梁; Biànliáng)," i.e., Kaifeng. The same stone inscription also associates the building's establishment with two names: An-du-la (俺都喇; Ăndūlǎ perhaps Abdullah)[n] and a certain Lieh-wei (列微; Liè wēi),[o] probably transcribing Levi, who is described as the Wu-ssu-ta (五思達; Wǔsīdá) of the community. This last term is probably a phonetic rendering of the Persian word ustad ("master", religious leader), the equivalent of "rabbi" in a Jewish context in that language. Irene Eber, among others, believes that the Jews must have settled in this Song dynasty capital city of Kaifeng, then also known as Bianjing, no later than 1120, some years before the Song-Jin alliance broke down. In 1163, the synagogue of Kaifeng city was established. The 1489 stele speaks of its establishment coinciding with the first year of the Longxing (隆興; Lóngxīng) era of the Song emperor Xiaozong (孝宗; Xiàozōng), namely 1161, which sets the synagogue's establishment in the first year of the reign of the Jurchen Emperor Jin Shizong (金世宗; Jīn Shìzōng), within whose territory Kaifeng lay. If the city was Jurchen, it is asked, why does the stele associate its foundation with the Song? Recently, Peng Yu has challenged the Song-entry consensus, favouring instead a variant of the "second wave" theory of Kaifeng Jewish origins, one version of which holds that Jews probably figured among the large number of peoples collectively known as the Semu (色目人; sèmùrén) who were captured during Mongol campaigns in the West and conveyed east to serve in the bureaucracy and assist the Mongols in administering China after its conquest. The two names associated in 1489 with the establishment of the synagogue in 1163, An-du-la and Lieh-wei (namely Abdullah and Levi), are in Yu's interpretation retrodated from later times. An-du-la, on the basis of the 1679 stele, he reads as the religious name of An Cheng (俺誠; Ăn Chéng), said to be a Kaifeng Jewish physician, who "restored" the synagogue in 1421 (not 1163).[q] According to the Diary of the Defence of Pien,[r] the Kaifeng Jewish Li/Levi clan, from whose ranks some 14 manla or synagogue leaders were drawn, only arrived in Kaifeng after relocating from Beijing during the Hung Wu period (1368–1398) of the Ming dynasty. Yu's Yuan-entry theory claims that the Kaifeng Jews entered China together with the Muslim Hui-hui people during the Mongol Yuan dynasty. The Jews themselves were defined as a Hui people, due to similarities between Jewish and Islamic traditions. They were called blue hat Hui (藍帽回回; lánmào huíhuí) as opposed to the "white cap Hui" (白帽回回; báimào huíhuí), who were Muslims.[s] Chinese sources do not mention the existence of Chinese Jews until the Mongol Yuan dynasty. The explanation for these contradictions within the various stelae must lie, Yu thinks, in the impact of Ming imperial policies aiming to constrain peoples such as the Semu, who came en masse with the Mongols, to assimilate to the culture of the revived Han hegemony. 
The dynasty was marked by a distinct anti-foreign sentiment expressed in coercive decrees that enforced assimilation, and therefore, Yu infers, the Kaifeng Jews, under the Ming, claimed in their monumental stone inscriptions that their roots in China were ancient, going back at least to the nativist Song if not indeed to the Han period.[t] The stele sought to assert proof of a long accommodation by Jews to Chinese civilization in order to avoid discriminatory measures. Kaifeng was a cosmopolitan industrial metropolis with 600,000 to a million inhabitants in Northern Song times, which formed an intense hub for overland trade via the Silk Road and the commercial riverine networks connecting it to the eastern seaboard. Through it vast amounts of grain tribute also passed. Its strategic importance and wealth were recognized by successive dynastic powers over the period 905–959, such as the Liang (who gave it its present name), Jin, Later Han and Later Zhou, who all made it their capital, as did the Northern Song when they unified the empire, until the city was conquered by the Jurchen in 1127. Under siege, it surrendered to the Mongols in 1233. It would have been attractive to Persian Jewish merchants. The founding colony's members may have specialized in the manufacturing, dyeing, or pattern printing of cotton fabrics. By the early 16th century, an inscription mentions not only craftsmen, farmers and traders among them, but also scholars, physicians and officials, political and administrative, as well as military men in important posts. A Ming emperor conferred eight surnames upon the Jews. Other evidence points to 70–73 surnames.[u] The late 1672 inscription states that at the synagogue's inception (1163) there were 73 clans (姓; xìng) and 500 families (家; jiā) in the Kaifeng Jewish community. The Hongzhi stele (1489) (弘治碑; hóngzhìbēi) registers the names of 14 clans. Leaders among this community were called manla (暪喇; mánlǎ), a term usually explained as a loanword from Arabic mullah. It has been suggested, however, that it may well have been a phonetic transcription of the Hebrew ma'lā (מעלה) "the honourable".[v] The Persian rubrics of the Kaifeng Jewish liturgy are written in the Bukharan dialect and the Bukharan Jews believe that in the past, some of their kin migrated to China and ceased to have contact with their country of origin. Many of the known Hebrew names of the Kaifeng Jews were only found among Persian and Babylonian Jews. Jewish written sources do not mention how the Jews arrived in Kaifeng, though a legend says that they arrived by land on the Silk Road. Some Jesuit reports inaccurately stated the Kaifeng Jews did not intermarry. The Ming dynasty (1368–1644), in reaction to the foreign dynasty it replaced, laid down a policy of discrimination against foreigners, such as the resident Mongols and Semu. Laws regarding ethnic endogamy were issued that forbade members of such communities from marrying within their own groups. They were at liberty to marry only Han Chinese. Failure to do so would lead to enslavement. To what degree these measures were applied is unknown, but it is evident from their Memorial Book that intermarriage took place on a large scale among the Kaifeng Jews, certainly from Ming times and, it may be assumed, in Qing times. From the 19th century onwards it became the norm. 
They followed the Chinese custom of foot binding.[w] The custom of the levirate marriage was retained, and polygamy was practiced: one Kaifeng Jew, the Zhang (張) clan's Zhang Mei, is recorded in the Memorial Book as having six wives, while Jin Rong-Zhang from the Jin clan (金) had five.[x] Towards the end of the Ming period, calculations based on the community's memorial book suggest that the Kaifeng Jewish community amounted to some 4,000 people. The catastrophic flood of 1642 brought about a precipitous drop in their population as the Flood killed 3000 Jews. The flood also destroyed the synagogue. Considerable efforts were made to save the scriptures. One man of the Gao clan, Gao Xuan, dove repeatedly into the flooded synagogue to rescue what he could and afterward all seven clans helped restore and rewrite the 13 scrolls. They obtained some from Ningxia and Ningbo to replace them, and another Hebrew Torah scroll was bought from a Muslim in Ningqiangzhou (in Shaanxi), who acquired it from a dying Jew at Canton.[y] When Kaifeng Jews introduced themselves to the Jesuits in 1605, they called themselves members of the house of "Israel" (一賜樂業; Yīcìlèyè)[z] The Jesuits also noted that a Chinese exonym[aa] labelled them as Tiao jin jiao, "the sect that plucks the sinews" (挑筋教; Tiāojīn jiào).[ab] This term arose from observing that, in memory of Jacob's wrestling with the angel, their butchers extracted the sciatic nerve (Gid hanasheh) as required in Nikkur, marking them as distinct from Muslims who otherwise, like them, also refrained from eating pork. The evidence on the stelae shows that they identified the emergence of Judaism as coinciding with the early Zhou dynasty (c. 1046–256 BCE, in modern reckoning). Abraham (阿無羅漢; Āwúluóhàn) was recorded as wakening as from sleep to the 19th generation from Pangu[ac]-Adam (阿躭; Ādān), and grasping profound mysteries, founded Judaism. This is said to have occurred in the 146th year of the Zhou dynasty (i.e., 977 BCE). The floruit of Moses (乜攝; Miēshè) in turn is set in the 613th year of the same dynasty, namely around 510 BCE. In their prayers and liturgy, the traditional community followed Talmudic usage, celebrating all the Jewish festivals, observing the prayers, rituals and days of fasting variously associated with the Jewish Sabbath, Yom Kippur, Rosh Hashanah, Passover, Shavuot, Sukkot, Hanukkah, Purim and Tisha B'Av. Within a few centuries, nonetheless, practices regarding the coming of age ceremony, wedding and death and burial were acclimatized to the respective Chinese customs, though the text of the Kaddish in the Memorial Book suggests the prayer was recited at funerals. By sometime after the mid 19th century all of these practices appear to have been abandoned, including the observance of the Sabbath.[ad] Outside the synagogue was a large hall, the Tz'u t'ang (祖堂; zǔ táng) or "Hall of the Ancestors" where, according to the Portuguese Jesuit Jean-Paul Gozani (1647–1732) who lived in Kaifeng from 1698 to 1718, incense bowls were placed to commemorate the patriarchs and outstanding figures of the Law, as well as various holy men (聖人; shèngrén). This was similar to Chinese rites regarding ancestors, with the difference that no images were allowed. Their Pentateuch was divided into 53 sections according to the Persian style. 
The existence of Jews in China was unknown to Europeans until 1605, when Matteo Ricci, then established in Beijing, was visited by a Chinese official from Kaifeng.[ae] According to the account in De Christiana expeditione apud Sinas, Ricci's visitor, named Ai Tian (艾田; Ài Tián), was a chüren (舉人; jǔrén) – someone who had passed the provincial level of the imperial examination decades earlier in 1573. Ai Tian explained that he was a member of a 1,000-strong Israelite congregation that worshipped one God. They were unfamiliar with the word "Jew" (yóutài)[af] which, according to Zhang Ligang, first appeared in the 1820s when a German missionary used this translated name of "Jews Country" in a journal.[ag] When he saw a Christian image of The Madonna, Mary with Jesus and John the Baptist, he took it to be a representation of Rebecca with her children Jacob and Esau. Ai said that many other Jews resided in Kaifeng; they had a splendid synagogue (礼拜寺; Lǐbàisì), and possessed a great number of written materials and books. Ricci wrote that "his face was quite different to that of a Chinese in respect to his nose, his eyes, and all his features". This has been taken to allow an inference that, up to that time, the Kaifeng Jews had still largely shunned intermixing and were thus physically distinguishable from the surrounding population. About three years after Ai's visit, Ricci sent a Chinese Jesuit lay brother to visit Kaifeng; he copied the beginnings and ends of the holy books kept in the synagogue, which allowed Ricci to verify that they indeed were the same texts as the Pentateuch known to Europeans, except that they did not use Hebrew diacritics (which were a comparatively late invention). When Ricci wrote to the "ruler of the synagogue" in Kaifeng, telling him that the Messiah the Jews were waiting for had come already, the archsynagogus wrote back, saying that the Messiah would not come for another ten thousand years. Nonetheless, apparently concerned with the lack of a trained successor, the old rabbi offered Ricci his position, if the Jesuit would join their faith and abstain from eating pork. Later, another three Jews from Kaifeng, including Ai's nephew, stopped by the Jesuits' house while visiting Beijing on business, and got themselves baptized. They told Ricci that the old rabbi had died, and (since Ricci had not taken him up on his earlier offer), his position was inherited by his son, "quite unlearned in matters pertaining to his faith". Ricci's overall impression of the situation of China's Jewish community was that "they were well on the way to becoming Saracens [i.e., Muslims] or heathens." Father Joseph Brucker stated that Ricci's account of Chinese Jews indicated that there were only in the range of ten or twelve Jewish families in Kaifeng in the late 16th to early 17th centuries. In the Jesuits' manuscripts it was also stated that there was a greater number of Jews in Hangzhou. The Kaifeng Jewish community's isolation from other Jewish communities and marriages with Han Chinese and Islamic Chinese resulted in a decreased emphasis on Jewish identity and tradition. With some Kaifeng families, Muslim men did marry their Jewish women, while Muslim women did not marry the Jews.[ah][ai] In 1849, an observer who had contact with the Kaifeng Jewish community noted that "the Jews are quite Chinese in appearance." The Taiping Rebellion of the 1850s led to the dispersal of the community, but it later returned to Kaifeng. 
To avoid the threat of becoming defunct, the Kaifeng community dispatched members to Shanghai in order to seek help from Sephardic European Jewish merchants active there. The funds that were collected to this end were diverted to assist an influx of Russian Jews fleeing pogroms. Shanghai's Baghdadi Jewish community attempted to instruct Kaifeng Jews in the Jewish religious teachings and ritual.: 37 The firm of S. H. Sassoon took two Kaifeng brothers in flight from the Taiping rebels under their wing and had them sent to Bombay where they underwent circumcision. One died within two years but the other, Feba, was renamed Shalem Sholome David, and was employed by the Sassoons in their Shanghai office (1872–1882). In 1883 he married a Baghdadi Jewish woman, Habiba Reuben Moses, and became a respected member of the Jewish community in Bombay. During the Boxer rebellion the Bombay community offered to subsidize the relocation of Kaifeng Jews to Shanghai. The dismantlement of the synagogue sometime between 1850 and 1866 led to the community's demise. By the turn of the 19–20th century members of the community had fallen into dire poverty. The Zhang Kaifeng Jewish family had largely converted to Islam by this time.[aj] The site of the synagogue had turned into a fetid swamp. Much material belonging to it, even the roof tiles, was purchased by Muslims and others: two young Kaifeng Jews sold three Torahs to two Americans and an Austrian. Some property was also said to have been stolen. The Ark of the Sefer Torah was reportedly seen in a mosque. The site itself was apparently bought by Bishop White in 1914, and in 1954, the Chinese Communist government confiscated the property and built the Kaifeng Municipal Clinic (today 开封市中医院南院) on it. Some usages remained.[clarification needed] Burial coffins maintained a distinctive shape from those customary for Chinese.[ak] Kaifeng Jewish ancestry has been found among their descendants living among the Hui Muslims. Scholars have pointed out that Hui Muslims may have absorbed Kaifeng Jews instead of Han Confucians and Buddhists.[al] Kaifeng Chinese had difficulty in distinguishing Jews and Muslims, and spoke of the former as "ancient Islam" (回回古教; huíhuí gǔjiào). The blue hat Hui also referred to Jews converting to Islam.[am] Jin clan descendants also came to believe they were Muslims. Instead of being absorbed into Han, a portion of the Jews of China of Kaifeng became Hui Muslims.[an] In 1948, Samuel Stupa Shih (Shi Hong Mo) (施洪模) said he saw a Hebrew language "Religion of Israel" Jewish inscription on a tombstone in a Qing dynasty Muslim cemetery to a place west of Hangzhou.[ao] By Ricci's time, it is said that the Nanjing and Beijing Jews had become Muslims, though a Jewish community and synagogue still existed in Hangzhou. The Kaifeng Jews are not recognized as a minority among the 55 ethnic groups which have been granted this official status in China. Their bid to be so listed in 1953 was turned down by the Chinese government. Their registration as "Jewish descendants" (猶太後代; Yóutàihòudài) was changed to Han Chinese (漢; Hàn) out of official concerns that an ethnic status might lead them to seek privileges. What little remains of their material Jewish heritage has been locked away by Chinese authorities in a special room in the Kaifeng museum, ostensibly for the protection of their heritage or is conserved in the Dongda mosque (東大寺; Dōngdàsì), where the relics are inaccessible. 
Family papers and heirlooms were reportedly discarded or burnt out of fear of the Red Guards during the Chinese Cultural Revolution. In 1980, during a hajj pilgrimage, the Hui Muslim woman Jin Xiaojing (金效靜) realized she had Jewish roots. The Portland Rabbi Joshua Stampfer (1921–2019), on a visit to Kaifeng in 1983, estimated there were from 100 to 150 descendants of Kaifeng Jews, and provided the means for Jin Xiaojing's daughter, Qu Yinan, then a Beijing journalist, to study Judaism and Hebrew in California, where she became the first member of the Kaifeng community to convert back to the religion of her ancestors. Qu Yinan's family abstained from certain foods, such as shellfish and pork, similar to the stipulations of kosher dietary law, which marked them off from most neighbouring Chinese. She had been under the impression that her family was Muslim, since Muslims likewise abstain from pork, and her grandfather, like them, had worn a skullcap, only blue as opposed to the white cap worn by local Muslims. Writing in 1987, Daniel Elazar suggested it would be difficult to maintain that contemporary Kaifeng Chinese of Jewish descent are Jews. Proposals to establish a museum commemorating their history, despite the city's lack of Jewish artifacts and documents, have received enthusiastic backing from the local government, which considers that such a centre would have positive effects on the local economy via Jewish tourism. Elazar opines that, over the ensuing decades, Western Jews will manage to encourage the growth of Chinese Jews among the descendant population.[ap] The establishment of diplomatic relations between China and Israel in 1992 rekindled interest in Judaism and the Jewish experience. It is difficult to estimate the number of Jews in China, as counts have fluctuated with changes in official attitudes. A survey in the 1980s suggested that 140 families in China bore six of the traditional Jewish surnames, 79 of them in Kaifeng, amounting to 166 individuals. The last official census revealed about 400 official Jews in Kaifeng, now estimated at some 100 families totalling approximately 500 people. Up to 1,000 residents have ties to Jewish ancestry, though only 40 to 50 individuals partake in Jewish activities. Within the framework of contemporary rabbinic Judaism, matrilineal transmission of Jewishness is predominant, while Chinese Jews based their Jewishness on patrilineal descent. This has been attributed to the influence of Chinese cultural norms, where lines of descent are typically patrilineal. The Jewish sinologist Jordan Paper notes, however, that all genealogies in the Torah consist exclusively of male descent. The modern assumption that Judaism is matrilineal has been used, he adds, to deny the authenticity of Chinese Jews because their clan lineages were patrilineal.[aq] Kaifeng Jews are not recognized as Jews by birth and are required to formally convert to Judaism in order to receive Israeli citizenship. Some desire to reconnect with Judaism, and some say their parents and grandparents told them that they were Jewish and would one day "return to their land". Under Israel's Law of Return, aliyah requires proof of Jewish descent through at least one grandparent.
Though such evidence is not available for the Kaifeng community, and strict Orthodox Jewish rabbis would question their authenticity as Jews, Shavei Israel's Michael Freund sponsored, over the course of a decade (2006–2016), the emigration of 19 descendants of Kaifeng Jews to Israel, where they have variously studied Hebrew in ulpanim and at a yeshiva in preparation for conversion to Judaism. In the 21st century, both the Sino-Judaic Institute and Shavei Israel sent teachers to Kaifeng to help interested community members learn about their Jewish heritage, building on the pioneering work of the American Judeo-Christian Timothy Lerner. Advocates for the descendants of the Kaifeng Jews are exploring ways to convince the Chinese authorities to recognize the antiquity of the Kaifeng Jews and allow them to practice their Chinese Jewish way of life. Since 2015, descendants of the Kaifeng Jews have come under increased pressure and suspicion from the Chinese government. Kaifeng manuscripts Several Kaifeng Torah scrolls survive, housed in collections in the British Library and elsewhere. A number of surviving written works are housed at Hebrew Union College's Klau Library in Cincinnati, Ohio. Among the works in that collection are a siddur (a Jewish prayer book) in Chinese characters and a Hebrew codex of the Haggadah. The codex is notable in that, while it ostensibly contains vowels, it was clearly copied by someone who did not understand them. While the symbols are accurate portrayals of Hebrew vowels, they appear to be placed randomly, thereby rendering the voweled text as gibberish. Since Modern Hebrew is generally written without vowels, a literate Hebrew speaker can disregard these markings, as the consonants are written correctly, with few scribal errors. Also at the Klau Library are a haggadah from the 17th century and another from the 18th century, one written in a Judeo-Persian hand, the other in Chinese Hebrew square script (like that of the Torah scrolls), both using text primarily from an early stage of the Persian Jewish rite. A recent study of the text includes a facsimile of one manuscript and a sample of the other, the full text of the Hebrew/Aramaic and Judeo-Persian haggadah (in Hebrew characters), as well as an annotated English translation. Xun Zhou, a research fellow at SOAS, expressed doubts about the authenticity of the Kaifeng community, arguing that it was a construct of Christian-driven Orientalism, powered by the evangelical interests of James Finn and his two works on the question: The Jews in China (1843) and The Orphan Colony of Jews in China (1874). Finn relied on the accounts of the 17th-century Jesuit missionaries. Zhou maintained that the community had no Torah scrolls until 1851, when they suddenly appeared to be sold to eager Western collectors. She also stated that drawings of the synagogue were doctored in the West because the original did not look like one, and that the Kaifeng community claimed to have kept some Jewish practices since before they are known to have begun. Xun Zhou posited that the Kaifeng community was not Jewish in any meaningful sense. Her hypothesis has not found support within the scholarly community.[ar] In an overview of the place of Kaifeng Jews within the broader context of Jewish history, Simon Schama notes its exceptionality against the generally tragic diffidence of host societies toward Jewish settlements: To survey the predicament of Jews in much of the rest of the world is to marvel at what the Kaifeng community escaped.
In China, Jews were not subjected to violence and persecution, nor demonized as God-killers. Their synagogues were not invaded by conversionary harangues. They were not physically segregated from non-Jews nor forced to wear humiliating forms of identification on their dress. They were not forced into the most despised and vulnerable occupations, not stigmatized as grasping and vindictive, and portrayed neither as predatory monsters nor pathetic victims.[as] Genetics Evidence garnered from testing the Y-chromosome of members of the Kaifeng Jewish community shows their genetic relationship to Bukharan Jews from Uzbekistan through their shared haplogroup R-FT14557 and to Mizrahi Jews from Iraq through their shared haplogroup J-FTF9916. This adds to other evidence that Kaifeng Jews are authentic ethnic Jews who descend in part from the ancient Israelites. Books and films The American novelist Pearl S. Buck, raised in China and fluent in Chinese, set one of her historical novels (Peony) in a Chinese Jewish community. The novel deals with the cultural forces which are gradually eroding the separate identity of the Jews, including intermarriage. The title character, the Chinese bondmaid Peony, loves her master's son, David ben Ezra, but she cannot marry him due to her lowly status. He eventually marries a high-class Chinese woman, to the consternation of his mother, who is proud of her unmixed heritage. Descriptions of remnant names, such as a "Street of the Plucked Sinew", and descriptions of customs such as refraining from eating pork, are prevalent throughout the novel. The Broadway musical Chu Chem is a fictional tale which revolves around the Kaifeng Jewish community. In the show, a group of European actors joins a troupe of Chinese performers in order to present the story of Chu Chem, a scholar who journeys to Kaifeng with his wife Rose and his daughter Lotte because he wants to learn about his ancestors and find a husband for Lotte. In his 1992 documentary series Legacy, writer Michael Wood traveled to Kaifeng and walked down a small lane known as the "alley of the sect who teach the Scriptures", that is, the alley of the Jews. He mentioned that there are still Jews in Kaifeng today, but they are reluctant to reveal themselves "in the current political climate". The documentary's companion book further states that one can still see a "mezuzah on the door frame, and the candelabrum in the living room". A recent documentary, Minyan in Kaifeng, covers the present-day Kaifeng Jewish community during a trip taken by Jewish expatriates who had been meeting for weekly Friday night services in Beijing; upon learning about the Jews of Kaifeng, they decided to travel there to meet some of the descendants of the Kaifeng Jews and hold a Shabbat service. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Queen%27s_University_at_Kingston] | [TOKENS: 9324] |
Contents Queen's University at Kingston Queen's University at Kingston, commonly known as Queen's University or simply Queen's, is a public research university in Kingston, Ontario, Canada. Queen's holds more than 1,400 hectares (3,500 acres) of land throughout Ontario and owns Herstmonceux Castle in East Sussex, England. Queen's is organized into eight faculties and schools. The Church of Scotland established Queen's College in October 1841 via a royal charter from Queen Victoria. The first classes, intended to prepare students for the ministry, were held 7 March 1842, with 15 students and two professors. In 1869, Queen's was the first Canadian university west of the Maritime provinces to admit women. In 1883, a women's college for medical education affiliated with Queen's University was established after male staff and students reacted with hostility to the admission of women to the university's medical classes. In 1912, Queen's ended its affiliation with the Presbyterian Church, and adopted its present name. During the mid-20th century, the university established several faculties and schools and expanded its campus with the construction of new facilities. Queen's is a co-educational university with more than 33,842 students and over 131,000 alumni living worldwide. Notable alumni include government officials, academics, business leaders and 62 Rhodes Scholars. As of 2022, five Nobel Laureates and one Turing Award winner have been affiliated with the university. The university funds several magazines and journals, among which are the Queen's Quarterly that has been published since 1893. History Queen's was a result of an outgrowth of educational initiatives planned by Presbyterians in the 1830s. A draft plan for the university was presented at a synod meeting in Kingston in 1839, with a modified bill introduced through the 13th Parliament of Upper Canada during a session in 1840. On 16 October 1841, a royal charter was issued through Queen Victoria establishing Queen's College at Kingston. Queen's resulted from years of effort by Presbyterians of Upper Canada to found a college for the education of ministers in the growing colony and to instruct youth in various branches of science and literature. They modelled the university after the University of Edinburgh and the University of Glasgow. Classes began on 7 March 1842, in a small woodframe house on the edge of the city with two professors and 15 students. The college moved several times during its first eleven years, before settling in its present location. Prior to Canadian Confederation, the Presbyterian Church in Scotland, the Canadian government, and private citizens financially supported the college. In 1869, Queen's was the first Canadian university west of the Maritime provinces to admit women. After Confederation, the college faced ruin when the federal government withdrew its funding and the Commercial Bank of the Midland District collapsed, a disaster which cost Queen's two-thirds of its endowment. The college was rescued after Principal William Snodgrass and other officials created a fundraising campaign across Canada. The risk of financial ruin worried the administration until the century's final decade. They considered leaving Kingston and merging with the University of Toronto as late as the 1880s. With the additional funds bequeathed from Queen's first major benefactor, Robert Sutherland, the college staved off financial failure and maintained its independence. Queen's was given university status on 17 May 1881. 
In 1880, three women were admitted to the university's medical degree courses; however, their presence was met with such hostility by male students and staff that the university decided to expel the women in 1883. A Women's Medical College was founded to enable the three students to complete their studies. Theological Hall, completed in 1880, originally served as Queen's main building throughout the late 19th century. In 1912, Queen's separated from the Presbyterian Church of Scotland and changed its name to Queen's University at Kingston. Queen's Theological College remained under the control of the Presbyterian Church in Canada until 1925, when it joined the United Church of Canada. The theological college merged with the Queen's department of religious studies, and the program closed in 2015. The university faced another financial crisis during World War I from a sharp drop in enrolment due to the military enlistment of students, staff, and faculty. A $1,000,000 fundraising drive and the armistice in 1918 saved the university. Approximately 1,500 students fought in the war and 187 died. On 18 August 1938, a year prior to the start of World War II, US President Franklin D. Roosevelt came to Queen's to accept an honorary degree. In a broadcast heard around the world, the President voiced the American policy of mutual alliance and friendship with Canada. During World War II, 2,917 graduates from Queen's served in the armed forces, suffering 164 fatalities. The Memorial Room in Memorial Hall of the John Deutsch University Centre lists Queen's students who died during the world wars. Queen's grew quickly after the war, propelled by the expanding postwar economy and the demographic boom that peaked in the 1960s. From 1951 to 1961, enrolment increased from just over 2,000 students to more than 3,000. The university embarked on a building program, constructing five student residences in less than ten years. After the reorganization of legal education in Ontario in the mid-1950s, Queen's Faculty of Law opened in 1957 in the new John A. Macdonald Hall. Other construction projects at Queen's in the 1950s included the construction of Richardson Hall to house Queen's administrative offices and Dunning Hall. By the end of the 1960s, like many other Canadian universities, Queen's tripled its enrolment and greatly expanded its faculty, staff, and facilities, as a result of the baby boom and generous support from the public sector. By the mid-1970s, the university had 10,000 full-time students. Among the new facilities were four more high-capacity residences: An Clachan; Elrond College (now Princess Towers), a cooperative residence that the university no longer owns; John Orr Tower, situated on the west campus; and Jean Royce Hall. In addition, new facilities included separate buildings for the Departments of Mathematics, Physics, Biology and Psychology, the Social Sciences, and the Humanities. During this period, Queen's created the Schools of Music, Public Administration (now part of Policy Studies), Rehabilitation Therapy, and Urban and Regional Planning. The establishment of the Faculty of Education in 1968 on land about a kilometre west of the university inaugurated the university's west campus. Queen's was an early pioneer of computer-assisted legal research; it was the home of the QUIC/LAW Canadian legal research project from 1968 to 1973, when the project was spun off and commercialized.
QUIC/LAW's software was licensed to West Publishing in 1976 as the foundation for the Westlaw database, and then the entire Canadian law database, by then known as Quicklaw, was sold to West's archrival LexisNexis in 2002. The first female chancellor of Queen's University, Agnes Richardson Benidickson, was installed on 23 October 1980. Queen's celebrated its sesquicentennial anniversary in 1991, and Charles, Prince of Wales, and his then wife, Diana, visited the campus to mark the occasion. The Prince of Wales presented a replica of the 1841 Royal Charter granted by Queen Victoria, which had established the university; the replica is displayed in the John Deutsch University Centre. In 1993, Queen's received Herstmonceux Castle as a donation from alumnus Alfred Bader. The university uses the castle to house Bader College. In 2001, the Senate Educational Equity Committee (SEEC) studied the experiences of visible minority and Aboriginal faculty members at Queen's after a black female professor left, alleging she had experienced racism. Following this survey, the SEEC commissioned a study which found many perceived a 'culture of Whiteness' at the university. The report concluded that "white privilege and power continues to be reflected in the Eurocentric curricula, traditional pedagogical approaches, hiring, promotion and tenure practices, and opportunities for research" at Queen's. The university's response to the report is the subject of continuing debate. The administration implemented measures to promote diversity beginning in 2006, such as the position of diversity advisor and the hiring of "dialogue monitors" to facilitate discussions on social justice. In May 2010, Queen's University joined the Matariki Network of Universities, an international group of universities created in 2010, which focuses on strong links between research and undergraduate teaching. In response to the COVID-19 pandemic in Ontario, the university received $440,000 from the Government of Canada to increase uptake of COVID-19 vaccines among health care providers, community organizations, and vulnerable individuals who are vaccine-hesitant. In July 2021, former senator Murray Sinclair began his term as the 15th chancellor of Queen's. He was succeeded by broadcast journalist Shelagh Rogers in July 2024. In 2023, the university disclosed a projected deficit of $48 million for the 2024 year. Stringent measures were unveiled by Provost Matthew Evans in response to the deficit, which included a cap on class size and a hiring freeze. The measures caused a backlash among faculty and students, with the latter organizing protests. Evans faced significant criticism for his handling of the crisis, which was widely covered in both local and national publications in Canada. Campus The university grounds lie within the neighbourhood of Queen's in the city of Kingston, Ontario. The university's main campus is bordered to the south by Lake Ontario and Kingston General Hospital, to the east by city parks, and in all other directions by residential neighbourhoods known as the University District. The campus grew to its present size of 40 ha (99 acres) through gradual acquisitions of adjacent private lands and remains the university's largest landholding. In addition to its main campus in Kingston, Queen's owns several other properties around Kingston, as well as in Central Frontenac Township, Ontario; Rideau Lakes, Ontario; and East Sussex, England.
Queen's University is situated on traditional Anishinaabe and Haudenosaunee territory. The buildings at Queen's vary in age from Summerhill, which opened in 1839, to Mitchell Hall, which opened in 2018. Grant Hall, completed in 1905, is considered the university's most recognizable landmark. It is named after Reverend George Munro Grant, who served as Queen's seventh principal. The building is used to host concerts, lectures, meetings, exams, and convocations. Two buildings owned and managed by the university have been listed as National Historic Sites of Canada. The Kingston General Hospital is the oldest operating public hospital in Canada. The Roselawn House, which is east of the west campus, is the core component of the university's Donald Gordon Centre. Queen's University Libraries include six campus libraries and archives in six facilities housing 2.2 million physical items and 400,000 electronic resources, including e-books, serial titles, and databases. The library's budget in 2007–2008 was $18.1 million, with $9.8 million dedicated to acquisitions. The libraries are the Bracken Health Sciences Library, Education Library, Lederman Law Library, Stauffer Humanities and Social Sciences Library, and Engineering & Science Library. The W.D. Jordan Rare Books and Special Collections Library notably holds early books dated from 1475 to 1700. The Engineering & Science Library and the W.D. Jordan Rare Books and Special Collections Library share facilities, known as Douglas Library. Since 1981, the Queen's University archives has been housed in Kathleen Ryan Hall. The archive manages, preserves, conserves, and makes accessible the information assets and historical record of the university. In addition to the university's archive, Kathleen Ryan Hall also houses the City of Kingston's archives. Queen's operates the Miller Museum of Geology, an earth-science teaching museum which features earth-science and geological collections of 10,000 minerals and 865 fossils, as well as an exhibit on the geology of the Kingston area. The museum is largely used as an earth-science teaching museum for local schools and natural-science interest groups in eastern Ontario. The permanent exhibits feature dinosaurs, dinosaur eggs, fossils of early multi-celled animals, and land tracks fossilized 500 million years ago. Queen's art collections are housed at the Agnes Etherington Art Centre. The art centre owes its name to Agnes Etherington, whose house was donated to the university and is used as an art museum attached to the main art centre. Opened in 1957, it contains over 14,000 works of art, including works by Rembrandt and Inuit art. The university's student body and faculty run the Union Gallery, an art gallery opened in 1994. The gallery is dedicated to the promotion of student and contemporary art. The university has 18 student residences: Adelaide Hall, Ban Righ Hall, Brant House, Chown Hall, Gordon House, Brockington House, Graduate Residence, Harkness Hall, John Orr Tower Apartments, Leggett Hall, Leonard Hall, McNeill House, Morris Hall, Smith House, Victoria Hall, Waldron Tower, Watts Hall, and Jean Royce Hall. The largest is Victoria Hall, built in 1965, which houses nearly 900 students. In September 2010, 83.3% of first-year students lived on campus, part of the 26% of the overall undergraduate population who lived on campus.
Residents were represented by two groups, the Main Campus Residents' Council, which represents the main campus, and the Jean Royce Hall Council, which represents the west campus (Jean Royce Hall, Harkness International Hall, and the Graduate Residence). They were responsible for representing resident concerns, providing entertainment services, organizing events, and upholding Residence Community Standards. In 2013, the Main Campus and Jean Royce Hall Residents' Councils were amalgamated into one organization, called ResSoc, standing for Residence Society. ResSoc employs 7 Executives, 17 House Presidents, and 27 Residence Facilitators. ResSoc also has over 100 volunteer positions such as floor representatives and executive interns. In 2013, The Residence Society introduced the StAR (Student Appreciation in Residence) Positive Recognition program. The program encourages positive behaviour in residence and recognizes individuals who help others in need. Recipients are given a certificate as well as remuneration for their contributions. The Student Life Centre is the centre of student governance and student-directed social, cultural, entertainment, and recreational activities. It consists of the John Deutsch University Centre (JDUC), Grey House, Carruthers Hall, Queen's Journal House, MacGillivray-Brown Hall, and the non-athletic sections of Queen's Centre. Collectively, these buildings provide 10,500 square metres (113,000 sq ft) of space to the Queen's community. The JDUC contains the offices of a number of student organizations, including the Alma Mater Society of Queen's University (AMS) and the Society of Graduate and Professional Students (SGPS), as well as retail and food services. The university has 21 food outlets throughout the campus, as well as three major residence dining facilities. Queen's has off-campus facilities in the Kingston area and abroad. The university has a second campus in Kingston, known as the west campus. Acquired in 1969, the west campus is 2 km (1.2 mi) west of the main campus, and covers 27 ha (67 acres) of land. It has two student residences, the Faculty of Education, the Coastal Engineering Lab, and several athletic facilities, including the Richardson Memorial Stadium. In May 2007, the university approved the designs for the Isabel Bader Centre for Performing Arts, also in Kingston. The centre, home of the Department of Film and Media, opened in September 2014. The university owns a research facility in Rideau Lakes, Ontario, known as the Queen's University Biological Station. Opened during the 1950s, the field station encompasses approximately 3,000 ha (7,400 acres) of property, a range of habitat types typical of Eastern Ontario, and many species of conservation concern in Canada. Queen's has an agreement with Novelis Inc. to acquire a 20-hectare (49-acre) property next to the company's research and development centre in Kingston. The agreement is part of the plan to establish Innovation Park at Queen's University, an innovative technology park at the corner of Princess and Concession Streets. The property was acquired for $5.3 million, a portion of the $21 million grant Queen's received from the Ontario government in 2007 to pioneer this innovative new regional R&D "co-location" model. 
Queen's leases approximately 7,900 square metres (85,000 sq ft) of the Novelis R&D facilities to accommodate faculty-led research projects that have industrial partners, as well as small and medium-size companies with a research focus and a desire to interact with Queen's researchers. The remainder of the government funds supports further development of the technology park to transform the property into a welcoming and dynamic site for business expansion and relocation. Bader College is housed in Herstmonceux Castle, East Sussex, England, which was donated to Queen's in 1993 by alumnus Alfred Bader. Bader College is academically fully integrated with Queen's, although financially self-sufficient. Its mission is to provide academic programs for undergraduate students whose academic interests are oriented toward the United Kingdom, Europe, and the European Union; continuing-education programs for executives and other professional or "special interest" groups; a venue for conferences and meetings; a base for international graduate students and other scholars undertaking research in the United Kingdom and Europe; and an enhanced educational, social, and cultural environment for the local community, using the unique heritage of the castle. The opportunity to study at Bader College is not limited to Queen's students. Queen's has academic exchange agreements with Canadian and foreign universities. Queen's Sustainability Office, created in 2008, is charged with leading the university's green initiatives and raising awareness of environmental issues. The office is headed by a sustainability manager, who works with the university, external community groups, and the government. In 2009, with the signing of the Ontario Universities Committed to a Greener World agreement, Queen's pledged to transform its campus into a model of environmental responsibility. Queen's was the second Ontario university to sign the University and College Presidents' Climate Change Statement of Action for Canada in 2010. The university campus received a B grade from the Sustainable Endowments Institute on its College Sustainability Report Card for 2011. Administration The governance of the university is conducted through the Board of Trustees, the Senate, and the University Council, the first two of which were established under the Royal Charter of 1841. The Board is responsible for the university's conduct and management and its property, revenues, business, and affairs. Ex-officio governors of the Board include the university's Chancellor, Principal, and Rector. The Board has 34 other trustees, 33 of whom are elected by the various members of the university community, including elected representatives from the student body. The representative from Queen's Theological College is now the only appointed trustee. The Senate is responsible for determining all academic matters affecting the university as a whole, including student discipline. It consists of 17 ex-officio positions granted to the Principal and Vice-Chancellor, the Vice-Principals of the university, the senior dean of each faculty, the dean of student affairs, the deputy provost, and the presidents of the undergraduate, graduate, and faculty associations. The Senate also consists of 55 other members, appointed or elected by various communities of the university, including elected representatives of the student body. The Royal Charter of 1841 was amended to include the University Council in 1874.
The council is a composite of the Board of Trustees, senators, and an equal number of elected graduates. It serves as both an advisory and an ambassadorial body to the university as a whole and is responsible for the election of the Chancellor. Although it is not directly involved in operations, the Council may bring to the Senate or Board of Trustees any matter it believes affects Queen's well-being. The Council meets once per year, typically in May. The Chancellor is the highest officer and the ceremonial head of the university. The office was created in 1874 and first filled in 1877, although it was only enshrined in law in 1882, after it was added by amendment to the Royal Charter of 1841. The responsibilities of the Chancellor include presiding over convocations, conferring degrees, and chairing the annual meetings of the Council; the Chancellor is also an ex-officio officer and a voting member of the Board of Trustees. A person is elected to the office of Chancellor for a three-year term by the Council, unless there is more than one candidate, in which case an election is conducted among Queen's graduates. The Principal, who normally is also the Vice-Chancellor, acts as the chief executive officer of the university under the authority of the Board and the Senate, and supervises and directs the academic and administrative work of the university and of its teaching and non-teaching staff. Since 1974, principals have been appointed for five-year terms, renewable subject to review. The formal authority for the appointment of the Principal rests under the Royal Charter with the Board of Trustees, although recent principals have been selected by a joint committee of trustees and senators. The office of the Vice-Chancellor has typically been held by the incumbent Principal. In 1961, the Board secured an amendment allowing it to separate the office of Principal from that of Vice-Chancellor if it wished. The first and only person to hold the office of Vice-Chancellor but not the office of Principal was William Archibald Mackintosh. The current Principal is Patrick Deane, serving as the twenty-first principal since 1 July 2019. The Rector is the third officer of the university and serves as the highest-ranking representative of the student body. Though the first Rector took office in 1913, the role has been held exclusively by students since 1969, when the student body forced the resignation of then-Rector Senator Gratton O'Leary. Unlike the executives of the various student governments, the Rector represents all students – both undergraduate and graduate – and is elected to a three-year term, though it has become traditional for student Rectors to step down after only two years. Despite standing separately from any student government, the Rector works closely with the AMS and SGPS to represent the interests of their constituent students. This allows the Rector, both formally and informally, to act as an intermediary between students and the university administration on a range of topical, sensitive, or controversial issues. The Rector serves as one of three student representatives on the Board of Trustees (the other two being the Undergraduate and Graduate Student Trustees) and is a recognized observer at the Senate. Additionally, the Rector is often called upon to represent student interests on various committees of the Board and Senate. Finally, the Rector plays a ceremonial role at events such as convocation.
The university completed the 2011–12-year with revenues of $947.7 million and expenses of $872.8 million, with an excess of revenues over expenses at $74.9 million. Government grants and student fees are the two largest sources of income for the university. As of 30 April 2022, Queen's endowment was valued at C$1,400,900,000. In 2023, Queen's disclosed a projected deficit of $62.8 million for 2024, which was later reduced to $48 million. Blaming the deficit on a tuition freeze introduced in Ontario in 2019, the university announced drastic measures, which included a hiring freeze and caps on class sizes. The university has been registered as an educational charitable organization by Canada Revenue Agency since 1 January 1967. As of 2011, the university registered primarily as a post-secondary institution, with 70% of the charity dedicated to management and maintenance. The charity has 21% dedicated to research, with the remaining 8% dedicated to awards, bursaries, and scholarships. Proceeds from the charity also go toward Queen's Theological College (as an affiliated college) and Bader College at Herstmonceaux Castle. Academics Queen's is a publicly funded research university and a member of the Association of Universities and Colleges of Canada. Full-time undergraduate programs comprise the majority of the school's enrolment, made up of 16,339 full-time undergraduate students. In 2009, the two largest programs by enrolment were the social sciences, with 3,286 full-time and part-time students, followed by engineering, with 3,097 full-time and part-time students. The university conferred 3,232 bachelor's degrees, 153 doctoral degrees, 1,142 master's degrees, and 721 first professional degrees in 2008–9. Queen's is organized into several faculties and schools. These include the Arts and Sciences, Education, Engineering and Applied Sciences, Health Sciences, Law, and Smith School of Business. Many of these faculties and schools are further organized into smaller departments, divisions, and schools. The university operates several study abroad programs, including the "First Year Program" at Bader College, and study abroad semester programs offered by the university's international programs office. Additionally, students can apply for international student exchange, with Queen's having exchange agreements with over 85 universities outside Canada. Queen's University has placed in post-secondary school rankings. In the 2022 Academic Ranking of World Universities rankings, the university ranked 201–300 in the world and 9–12 in Canada. The 2025 QS World University Rankings ranked the university 193rd in the world and tenth in Canada. The 2024 Times Higher Education World University Rankings placed the university 251–300 in the world, and 12th in Canada. In U.S. News & World Report 2022–23 global university rankings, Queen's placed 359th, and 12th in Canada. The Canadian-based news magazine Maclean's ranked the university eighth in its 2022 Medical-Doctoral Canadian university rankings. Queen's also placed in several rankings that evaluated the employment prospects of graduates. In a 2011 survey conducted by Mines ParisTech's, they found Queen's placed 38th in the world and first in Canada for number of graduates employed as the chief executive officer (or equivalent) of Fortune 500 companies. 
In an employability survey published by the New York Times in October 2011, in which CEOs and chairpersons were asked to select the top universities from which they recruited, the university placed 74th in the world and fifth in Canada. Queen's University is a member of the U15, a group of 15 Canadian research universities. In 2018, Queen's placed eleventh in Research Infosource's ranking of Canadian research universities, with a sponsored research income (external sources of funding) of $207.034 million in 2017. In the same year, Queen's faculty averaged a sponsored research income of $266,100, while graduate students averaged a sponsored research income of $44,300. The federal government is the largest funding source, providing 49.8% of Queen's research budget, primarily through grants. Corporations contribute another 26.3% of the research budget. Queen's research performance has been noted in several bibliometric university rankings, which use citation analysis to evaluate the impact a university has on academic publications. In 2019, the Performance Ranking of Scientific Papers for World Universities ranked Queen's 344th in the world and 14th in Canada. In University Ranking by Academic Performance's 2018–19 rankings, the university ranked 353rd in the world and 14th in Canada. The university operates six research centres and institutes: the Centre for Neuroscience Studies, the GeoEngineering Centre, the High Performance Computing Virtual Laboratory, the Centre for Health Innovation, the Sudbury Neutrino Observatory Institute, and the Southern African Research Centre. The Sudbury Neutrino Observatory's director, Arthur B. McDonald, is a member of the university's physics department. The observatory managed the SNO experiment, which showed that the solution to the solar neutrino problem is that neutrinos change flavour (type) as they propagate through the Sun. The SNO experiment thereby proved that neutrinos have non-zero mass, a major breakthrough in particle physics and cosmology. In October 2015, Arthur B. McDonald and Takaaki Kajita (University of Tokyo) jointly received the Nobel Prize in Physics for demonstrating that neutrinos change identities and therefore have mass. This was the first Nobel Prize awarded to a Queen's University researcher. In 1976, urologist Alvaro Morales, along with his colleagues, developed the first clinically effective immunotherapy for cancer by adapting the Bacille Calmette-Guérin tuberculosis vaccine for the treatment of early-stage bladder cancer. Other research facilities include the Queen's University Biological Station, the largest inland field station in Canada. The Biological Station's mandate is to provide teaching and research opportunities in biology and other related sciences, as well as the conservation of the local environment. Researchers and students have gathered at the biological station to conduct research and participate in courses spanning ecology, evolution, conservation, and environmental biology. In 2002, it became part of the United Nations–recognized Thousand Islands – Frontenac Arch Biosphere Reserve. Queen's University has a joint venture with McGill University, operating an academic publishing house known as the McGill-Queen's University Press. It publishes original peer-reviewed works in all areas of the social sciences and humanities. While the press's emphasis is on providing an outlet for Canadian authors and scholarship, it also publishes authors from throughout the world. It has over 2,800 books in print.
The publishing house was known as the McGill University Press prior to amalgamating with Queen's in 1969. The requirements for admission differ among students from Ontario, students from other provinces in Canada, and international students due to the lack of uniformity in marking schemes. In 2020, 38.2% of applications to full-time first-year studies were accepted. In 2013, the secondary school average for full-time first-year students at Queen's was 89% overall, with the Commerce, Education, and Engineering faculties having the highest entrance averages, at 91.7%, 90.8%, and 90.6% respectively. The application process emphasizes the optional Personal Statement of Experience. The statement expresses how the applicant's personal experiences may contribute to the university. It focuses on qualifications and involvement outside of academics and is an important factor in determining admission. Several faculties require applicants to submit a supplementary essay. Students may apply for financial aid such as the Ontario Student Assistance Program and Canada Student Loans and Grants through the federal and provincial governments. The financial aid provided may come in the form of loans, grants, bursaries, scholarships, fellowships, debt reduction, interest relief, and work programs. In the 2010–11 academic year, Queen's provided $36.5 million worth of student need–based and merit-based financial assistance. Student life The student body of Queen's is represented by two primary students' unions, the Alma Mater Society (AMS) for all undergraduate students – as well as Medicine and MBA students – and the Society of Graduate and Professional Students for graduate and law students. The AMS of Queen's University is the oldest undergraduate student government in Canada. It recognizes more than 200 student clubs and organizations. All accredited extracurricular organizations at Queen's fall under the jurisdiction of either the AMS or the Society of Graduate and Professional Students. The organizations and clubs accredited at Queen's cover a wide range of interests, including academics, culture, religion, social issues, and recreation. The oldest accredited club at Queen's is the Queen's Debating Union, which was formed in 1843 as the Dialectic Society. The Dialectic Society served as a form of student government until the AMS was formed from the Dialectic Society in 1858. The Queen's Bands is a student marching band, founded in 1905, consisting of the colour guard, pipe band, brass band, drum corps, highland dancers, and cheerleaders. Fraternities and sororities have been banned at the university since a ruling by the AMS in 1933. The ruling was passed in response to the formation of two fraternities in the 1920s. No accredited sororities have ever existed at Queen's. The Engineering Society (EngSoc) is the representative body for engineering students. Formed in 1897, it has 3,000 members on campus, 15,000 active alumni, and an annual budget of $1.7 million. EngSoc oversees about 45 student-run initiatives. The AMS also manages the Student Constable peer-to-peer security service at the university. It is responsible for ensuring the safety of patrons and staff at sanctioned events and venues across the campus, enforcing the governing regulations of the AMS, and upholding regulations stipulated in the Liquor Licence Act of Ontario. Student Constables do not serve as the university's primary security service. 
The university administration operates its own security service, which is registered in Ontario as a private security service. As of March 2012, the Student Constables are funded through a mandatory $10 fee levied on undergraduates annually by the AMS. The Agnes Benidickson Tricolour Award and induction into the Tricolour Society are the highest tribute that can be paid to a student for valuable and distinguished service to the university in non-athletic, extra-curricular activities. Queen's students operate a number of media outlets throughout campus. The Queen's Journal is Queen's main student newspaper. During the academic year, the Journal traditionally published two issues a week until the last month of the semester, when only one issue was published each week, for a total of 28 issues a year. The newspaper was established in 1873, making it one of the oldest student newspapers in Canada. In 2013, the Journal reduced its print schedule from twice a week to once a week; past volumes had consisted of 40 print issues. To supplement the print edition, for the first two months of each semester, it publishes new content online throughout the week. In 2019, the Journal reduced its print schedule from once a week to once every other week. Online content continues to be published in place of a Tuesday print issue. The other weekly student publication from Queen's is Golden Words, a satirical humour paper managed by the Engineering Society. Queen's students also run a radio station, CFRC. Queen's radio station is the longest-running campus-based broadcaster in the world, and the second-longest-running radio station in the world, surpassed only by the Marconi companies. The station's first public broadcast was on 27 October 1923, when the football game between Queen's and McGill was called play-by-play. Since 2001, the station has broadcast on a 24-hour schedule. In 1980, a student-run television service called Queen's TV (QTV) was established; as of 2011, episodes aired every weekday on its website and every Wednesday on local television. In 2015, QTV was amalgamated with two other student-run services, Yearbook & Design Services (YDS) and Convocation Services, to form "Studio Q". Sports teams at Queen's University are known as the Golden Gaels. The Golden Gaels teams participate in U Sports' Ontario University Athletics conference for most varsity sports. Varsity teams at Queen's include basketball, cross country, freestyle wrestling, Canadian football, ice hockey, rowing, rugby, soccer, and volleyball. The men's rugby team won the OUA Championship consecutively from 2012 to 2016. The athletics program at Queen's University dates back to 1873. With 39 regional and national championships, Queen's football program has secured more championships than any other sports team at Queen's, and more than any other football team in Canada. Queen's and the University of Toronto are the only universities to have claimed Grey Cups (Queen's won in 1922, 1923, and 1924), now the championship trophy for the Canadian Football League. Queen's also competed for the Stanley Cup in 1894–95, 1898–99, and 1905–06. Queen's University has a number of athletic facilities open to both varsity teams and students. The stadium with the largest seating capacity at Queen's is Richardson Memorial Stadium. Built in 1971, the stadium seats 8,500 and is home to the varsity football team.
The stadium has also hosted a number of international games, including Canada's second-round 2006 FIFA World Cup qualification games and the inaugural match of the Colonial Cup, an international rugby league challenge match. The stadium reopened on 17 September 2016, with its first football game following an extensive revitalization. Other athletic facilities at Queen's include the Athletic and Recreation Centre, which houses a number of gymnasiums and pools; Tindall Field, a multi-season playing field and jogging track; Nixon Field, home to the school's rugby teams; and West Campus Fields, which are used by a number of varsity teams and student intramural leagues. Queen's maintains an academic and athletic rivalry with McGill University. Competition between rowing athletes at the two schools has led to an annual boat race between the two universities each spring since 1997, inspired by the famous Oxford-Cambridge Boat Race. The football rivalry, which started in 1884, ended after Canadian university athletic divisions were reorganized in 2000; the Ontario-Quebec Intercollegiate Football Conference was divided into Ontario University Athletics and the Quebec Student Sports Federation. The rivalry returned in 2002, when it transferred to the annual home-and-home hockey games between the two institutions. Queen's students refer to these matches as "Kill McGill" games and usually show up in Montreal in atypically large numbers to cheer on the Queen's Golden Gaels hockey team. In 2007, McGill students arrived in busloads to cheer on the McGill Redmen, occupying a third of Queen's Jock Harty Arena. The school also competes in the annual Old Four (IV) soccer tournament, along with McGill, the University of Toronto, and the University of Western Ontario. Insignias and other representations Queen's official colours are gold, blue, and red. Queen's colours are also used on the school flag. It displays three vertical stripes, one for each colour. In the upper-left corner, on the blue stripe, is a yellow crown, symbolizing the royal charter. The university also has a ceremonial flag, which is reserved for official university uses. The ceremonial flag is a square design of the Queen's coat of arms. The university also has a tartan, whose colours each represent an academic discipline: blue (Medicine), red (Arts & Science), gold (Applied Science), white (Nursing Science), maroon (Commerce & MBA), light blue (Kinesiology and Physical Education), and purple (Theology). The tartan was created in 1966 by Judge John Matheson and is registered with the Scottish Tartans Authority. The coat of arms appeared as early as 1850 but was not registered with the College of Arms until 1953. The coat of arms was registered with the Scottish equivalent of the College of Arms, the Lord Lyon King of Arms, in 1981, and with the Canadian Heraldic Authority during Queen's sesquicentennial celebrations in 1991. The coat of arms is based on that of the University of Edinburgh, the institution after which Queen's was modelled. The coat of arms consists of a gold shield with red edges, divided into four triangular compartments by a blue, diagonal St. Andrew's Cross. A golden book, symbolizing learning, sits open at the centre of the cross. In each of the four compartments is an emblem of the university's Canadian and British origins: a pine tree for Canada, a thistle for Scotland, a rose for England, and a shamrock for Ireland.
The border is decorated with eight gold crowns, symbolic of Queen Victoria and the university's Royal Charter. Queen's motto, from Isaiah 33:6, is Sapientia et Doctrina Stabilitas. The Latin motto is literally translated as "Wisdom and knowledge shall be the stability of thy times," and has been in use since the 1850s. A number of songs are commonly played and sung at events such as commencement, convocation, and athletic contests, including the "Queen's College Colours" (1897), also known as "Our University Yell", and the "Oil Thigh", with words by A.E. Lavell, sung to the tune of "John Brown's Body". The "Oil Thigh", created in 1891, incorporates the older song "Queen's College Colours". The name "Oil Thigh" comes from the chorus of the song, which begins with the Gaelic words "oil thigh". The modern version of the song was crafted in 1985, when a line was changed to include Queen's women athletes in the cheer. Notable people Queen's graduates have found success in a variety of fields, heading diverse institutions in the public and private sectors. In 2011, the university had over 131,000 alumni, living in 156 countries. Queen's faculty and graduates have won many awards, including the Nobel Prize, the Turing Award, and the Victoria Cross. As of 2016, 57 Queen's students and graduates had been awarded the Rhodes Scholarship. Queen's is also a partner of the Loran Scholars Foundation, with over 20 Loran Scholars having attended the university. In 2013, the artist Raine Storey began attending Queen's after becoming the first visual artist ever to receive the award. Several Nobel laureates are associated with the university, including faculty member Arthur B. McDonald, who received the Nobel Prize in Physics for fundamental research showing that neutrinos change identities and have mass; former National Research Council postdoctoral fellow at Queen's Sir Fraser Stoddart, awarded the Nobel Prize in Chemistry "for the design and synthesis of molecular machines"; and David Card, who shared the Nobel Memorial Prize in Economic Sciences in 2021 "for his empirical contributions to labour economics". Another notable individual associated with the university is Sandford Fleming, an engineer who first proposed the use of a universal time standard and a former Chancellor of Queen's. Notable alumni in the field of science include Adolfo de Bold, who won the Gairdner Foundation Award for the discovery and isolation of atrial natriuretic peptide, and Shirley Tilghman, a molecular biologist and former President of Princeton University. Notable Chancellors who were once politicians include Robert Borden, a former Prime Minister of Canada, and provincial premiers Peter Lougheed and Charles Avery Dunning. Roland Michener, Governor General of Canada from 1967 to 1974, served as Chancellor from 1973 to 1980. Many alumni have gained international prominence for serving in government, including Prince Takamado, a member of the Imperial House of Japan; Godwin Friday, the fifth Prime Minister of Saint Vincent and the Grenadines; and Kenneth O. Hall, the fifth Governor General of Jamaica. The 29th Governor General of Canada, David Johnston, is also a graduate and former faculty member of the university. Three Canadian premiers are also alumni of Queen's: William Aberhart, the 7th Premier of Alberta; Frank McKenna, the 27th Premier of New Brunswick; and Kathleen Wynne, the 25th Premier of Ontario. The 14th Premier of Alberta, Alison Redford, also attended the university for two years.
Thomas Cromwell, a former Justice of the Supreme Court of Canada, is an alumnus. Prominent alumni who became leaders in business include Derek Burney, former chairman and CEO of Bell Canada; Donald J. Carty, chairman of Virgin America and Porter Airlines and former chairman and CEO of AMR Corporation; Earle McLaughlin, former president and CEO of the Royal Bank of Canada; Gordon Nixon, also a former president and CEO of the Royal Bank of Canada; Kimbal Musk, co-founder of Zip2; and F. C. Kohli, founder of Tata Consultancy Services. Alumnus David A. Dodge was the 7th Governor of the Bank of Canada and the 13th Chancellor of Queen's. Elon Musk, founder of SpaceX and CEO of Tesla, Inc., attended Queen's for two years. |
======================================== |
[SOURCE: https://techcrunch.com/2026/02/17/wordpress-com-adds-an-ai-assistant-that-can-edit-adjust-styles-create-images-and-more/] | [TOKENS: 818] |
WordPress.com adds an AI Assistant that can edit, adjust styles, create images, and more WordPress.com, the website hosting platform from Automattic, will now include a built-in WordPress AI assistant, the company announced on Tuesday. The feature is designed to work inside the website to understand its content and layout, allowing site owners to make changes with natural language commands. With the new tool, you can adjust the site's layout, its style, or other patterns by issuing commands to the AI assistant. You'll then see the changes reflected on the site as you work. These instructions don't have to be precisely tailored prompts, either, the company notes. Instead, you can use more general language, like "make this section feel more modern or spacious," "change my site's colors to be brighter and bolder," or "give me more font options that feel clean and professional." You can also direct the AI to add or adjust your layout, instructing it to do things like "add a contact page," or "add a testimonials section below this section." However, the company notes that its adjustments work with block themes, not classic ones. If you're using the latter, the assistant won't appear in the editor. The WordPress AI assistant can update the site's content, as well, like asking it to rewrite your bio to sound more confident, or translating a section into another language. The AI can also function like an editor, offering headline suggestions, fact checks, and other grammar and editing suggestions. This aspect is available through the block notes editor that arrived in WordPress 6.9, where you're able to collaborate with teammates in the editor. Now you can pull the AI into that workflow by typing @ai followed by your requests. The AI will provide its answers here, including relevant links and other information where it cites external sources. Meanwhile, for help with visuals, the assistant leverages Google Gemini's Nano Banana AI models, which can be used to either make new images or edit existing ones. With the AI helper, available as a new "Generate Image" button in the Media Library, you can specify image requirements like aspect ratios or dictate image styles. The company notes the WordPress AI Assistant is an opt-in feature users can enable if they choose. To do so, they will need to visit their Sites list once logged in, click their site name, and then Settings. Under settings, they'll scroll down to "AI tools" and toggle the "Enable AI assistant" setting on. Customers who purchase a website with the AI website builder will have the assistant enabled automatically.
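The article describes the image feature only through the Media Library UI, noting that it is backed by Google Gemini's "Nano Banana" models. As a rough, hypothetical illustration of what a direct call to that class of model looks like, here is a minimal sketch using Google's public google-genai Python SDK; the model name, prompt wording, and output handling are assumptions for illustration and this is not WordPress.com's actual integration.

```python
# Hypothetical sketch only: NOT WordPress.com's integration.
# Assumptions: the google-genai SDK is installed (pip install google-genai),
# an API key is available in the environment, and "gemini-2.5-flash-image"
# is used as the "Nano Banana" image model identifier.
from google import genai

client = genai.Client()  # picks up the API key from the environment

# Style and aspect-ratio requirements expressed in plain language,
# mirroring how the article says the "Generate Image" button lets users
# dictate image styles and aspect ratios.
prompt = (
    "A clean, professional hero image for a small-business website, "
    "bright and bold colours, 16:9 aspect ratio"
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed model name
    contents=prompt,
)

# Generated images come back as inline-data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("hero-image.png", "wb") as f:
            f.write(part.inline_data.data)
```

In the hosted product none of this is exposed to the user; the assistant simply adds the result to the Media Library, so the sketch only indicates the kind of request being made on the user's behalf.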
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-babbageonline-25] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 Babbage realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
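To make the preceding description of the program counter, jumps, and numbered memory cells concrete, here is a minimal, hypothetical sketch in Python. The instruction names, opcodes, and cell layout are invented purely for illustration and do not correspond to any real CPU; the point is only to show a fetch step, a program counter that normally advances by one, and a conditional jump that rewrites it.

    # A toy machine: memory is a list of numbered cells; the program counter (pc)
    # selects the next instruction. Instructions here are small tuples.
    memory = [0] * 32                     # 32 numbered cells, all initially 0
    memory[20] = 0                        # cell 20: running total
    memory[21] = 1                        # cell 21: current number to add
    memory[22] = 10                       # cell 22: stop once the counter exceeds this

    program = [
        ("add", 20, 21),                  # cell[20] = cell[20] + cell[21]  (the ALU's work)
        ("add_const", 21, 1),             # cell[21] = cell[21] + 1
        ("jump_if_le", 21, 22, 0),        # if cell[21] <= cell[22], set pc back to 0
        ("halt",),
    ]

    pc = 0                                # program counter: index of the next instruction
    while True:
        op = program[pc]                  # fetch the next instruction
        pc += 1                           # normally the pc simply moves to the next one
        if op[0] == "add":
            memory[op[1]] += memory[op[2]]
        elif op[0] == "add_const":
            memory[op[1]] += op[2]
        elif op[0] == "jump_if_le":       # a "jump" changes the program counter itself
            if memory[op[1]] <= memory[op[2]]:
                pc = op[3]
        elif op[0] == "halt":
            break

    print(memory[20])                     # 1 + 2 + ... + 10 = 55

Changing the program counter, as the jump_if_le line does, is exactly the mechanism the text describes for loops and conditional execution.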
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
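The byte and two's-complement representation described above can be checked directly. The short Python sketch below, using only the standard struct module, packs values into a single signed byte and shows the 2^8 = 256 possible patterns and the −128 to +127 range mentioned in the text; the specific values chosen are arbitrary examples.

    import struct

    # A single byte holds 2**8 = 256 distinct bit patterns.
    print(2 ** 8)                                  # 256

    # Interpreted as a signed (two's-complement) byte, the range is -128..+127.
    for value in (0, 1, 127, -1, -128):
        packed = struct.pack("b", value)           # "b" = signed 8-bit integer
        print(value, format(packed[0], "08b"))     # the raw bit pattern stored in memory

    # -1 is stored as 11111111 and -128 as 10000000 in two's-complement notation.
    # Larger numbers simply occupy several consecutive bytes; "<i" packs a
    # 32-bit (4-byte) signed integer in little-endian byte order.
    print(struct.pack("<i", 1000).hex())           # 'e8030000'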
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
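The time-slicing idea described above (pause a program, remember where it was, run another, resume later) can be mimicked in a few lines of Python using generators as stand-in "programs". This is only an illustration of the scheduling idea under a round-robin assumption, not how a real operating system or interrupt hardware is implemented; all names here are invented.

    from collections import deque

    def program(name, steps):
        """A stand-in 'program': each yield marks a point where an interrupt could occur."""
        for i in range(steps):
            yield f"{name} step {i}"

    # A simple round-robin scheduler: give each program one time slice in turn,
    # remembering where it stopped so that it can be resumed later.
    ready = deque([program("A", 3), program("B", 2), program("C", 3)])
    while ready:
        current = ready.popleft()          # pick the next program
        try:
            print(next(current))           # run it for one "time slice"
            ready.append(current)          # put it back so it runs again later
        except StopIteration:
            pass                           # this program has finished

The generator's saved position plays the role of the state the computer remembers when an interrupt occurs, and a finished or waiting program simply stays off the ready queue.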
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
The following example is written in the MIPS assembly language: Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
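The MIPS listing mentioned earlier does not appear in this extract. As a hedged, self-contained stand-in for the same ideas (mnemonics, an assembler that turns them into numeric opcodes, and a program that ends up as nothing but lists of numbers), here is a toy example in Python; the instruction names and numeric codes are invented for illustration and do not correspond to MIPS or any real architecture.

    # A toy instruction set: each mnemonic is assigned a numeric opcode.
    OPCODES = {"LOADI": 1, "ADD": 2, "ADDI": 3, "JLE": 4, "HALT": 5}

    # Assembly source: sum the numbers 1 to 1,000 (the repetitive-addition task
    # discussed above). Register 0 holds the total, register 1 the counter.
    source = [
        ("LOADI", 0, 0),        # r0 = 0
        ("LOADI", 1, 1),        # r1 = 1
        ("ADD",   0, 1),        # r0 = r0 + r1
        ("ADDI",  1, 1),        # r1 = r1 + 1
        ("JLE",   1, 1000, 2),  # if r1 <= 1000, jump back to instruction 2
        ("HALT",),
    ]

    # The "assembler": replace each mnemonic with its opcode, so the whole program
    # becomes lists of numbers that can sit in memory alongside ordinary data.
    machine_code = [(OPCODES[instr[0]],) + instr[1:] for instr in source]

    # A minimal interpreter playing the part of the control unit.
    registers = [0, 0]
    pc = 0
    while True:
        instr = machine_code[pc]
        pc += 1
        if instr[0] == OPCODES["LOADI"]:
            registers[instr[1]] = instr[2]
        elif instr[0] == OPCODES["ADD"]:
            registers[instr[1]] += registers[instr[2]]
        elif instr[0] == OPCODES["ADDI"]:
            registers[instr[1]] += instr[2]
        elif instr[0] == OPCODES["JLE"]:
            if registers[instr[1]] <= instr[2]:
                pc = instr[3]
        elif instr[0] == OPCODES["HALT"]:
            break

    print(registers[0])   # 500500

In this toy example the numeric codes are arbitrary; on real hardware each CPU family defines its own encodings, which is why machine language is tied to a particular architecture.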
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Federalism_in_the_United_States] | [TOKENS: 5129] |
Contents Federalism in the United States In the United States, federalism is the constitutional division of power between U.S. state governments and the federal government of the United States. Since the founding of the country, and particularly with the end of the American Civil War, power shifted away from the states and toward the national government. The progression of federalism includes dual, cooperative, and New Federalism. Early federalism Federalism is a form of political organization that seeks to distinguish states and unite them, assigning different types of decision-making power at different levels to allow a degree of political independence in an overarching structure. Federalism was a political solution to the problems with the Articles of Confederation which gave little practical authority to the confederal government. For example, the Articles allowed the Congress of the Confederation the power to sign treaties and declare war, but it could not raise taxes to pay for an army and all major decisions required a unanimous vote. The movement for federalism was greatly strengthened by the reaction to Shays' Rebellion of 1786–1787, which was an armed uprising of yeoman farmers in western Massachusetts. The rebellion was fueled by a poor economy that was created, in part, by the inability of the confederal government to deal effectively with the debt from the American Revolutionary War. Moreover, the confederal government had proven incapable of raising an army to quell the rebellion, so that Massachusetts had been forced to raise its own. The Annapolis Convention met in 1786; twelve delegates from five U.S. states (New Jersey, New York, Pennsylvania, Delaware, and Virginia) gathered to discuss and develop a consensus on reversing the protectionist trade barriers that each state had erected. New Hampshire, Massachusetts, Rhode Island, and North Carolina had appointed commissioners, who failed to arrive in Annapolis in time to attend the meeting, and Connecticut, Maryland, South Carolina, and Georgia had taken no action at all. The final report of the convention was adopted unanimously and sent to the Congress and to the states. It sought support for a broader constitutional convention to be held the following May in Philadelphia. It hoped that more states would be represented and that their delegates or deputies would be authorized to examine areas broader than simply commercial trade. On May 15, 1787, fifty-five delegates met at what would be known as the Constitutional Convention in the Philadelphia State House. There, the delegates debated the structure, provisions, and limitations of Federalism in what would be the Constitution of the United States. This was a clear development in federal thought. Preceding examples, such as in the Virginia Declaration of Rights, influenced the delegates as they framed their ideas of a Federal bicameral legislature (United States Congress), balanced representation of small and large states (Great Compromise), and checks and balances structures. James Madison stated in a pre-convention memorandum to the delegates that because "one could hardly expect the state legislatures to take enlightened views on national affairs", a stronger central government was necessary. Madison later wrote in Federalist No. 10 on his support for a federal government, "the smaller the number of individuals composing a majority, and the smaller the compass within which they are placed, the more easily will they concert and execute their plans of oppression.
Extend the sphere, and you take in a greater variety of parties and interests; you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens". The convention had begun altering its original plan but then decided to abandon continued efforts of emendation, and officially set about constructing a new Constitution of the United States. Because George Washington lent his prestige to the Constitution and because of the ingenuity and organizational skills of its proponents, the Constitution was eventually ratified in all states. Once the convention concluded and released the Constitution for public consumption, the Federalist and Anti-Federalist movements soon began publicizing their disagreeing beliefs in local newspapers and segments. The most forceful defense of the new Constitution was The Federalist Papers, a compilation of 85 anonymous essays published in New York City to convince the people of the state to vote for ratification. These articles, written by Alexander Hamilton and James Madison, with some contributed by John Jay, examined the benefits of the new, proposed Constitution, and analyzed the political theory and function behind the various articles of the Constitution. The Federalist Papers remain one of the most important sets of documents in American history and political science. Anti-Federalists, who were opposed to the new Constitution, were generally local rather than cosmopolitan in perspective, oriented to plantations and farms rather than commerce or finance, and wanted strong state governments and a weak national government. According to political scientist James Wilson, the Anti-Federalists "were much more committed to strong states and a weak national government....A strong national government, they felt, would be distant from the people and would use its powers to annihilate or absorb the functions that properly belonged to the states." The Anti-Federalist critique soon centered on the absence of a bill of rights, which Federalists in the ratifying conventions promised to provide. Washington and Madison had personally pledged to consider amendments, realizing that they would be necessary to reduce pressure for a second constitutional convention that might drastically alter and weaken the new federal government. Madison proposed amendments that gave more rights to individuals than to states, which led to criticisms of diversion by Anti-Federalists. The outgoing Congress of the Confederation scheduled elections for the new government, and set March 4, 1789 as the date that the new government would take power. In 1789, the new Congress of the United States submitted twelve articles of amendment to the states. Ten of these articles, written by congressional committees, achieved passage on December 15, 1791, and became the United States Bill of Rights. The Tenth Amendment set the guidelines for federalism in the United States. After the first federalist movement achieved its aims in promoting the Constitution, an official Federalist Party emerged with slightly different aims. This one was based on the policies of Alexander Hamilton and his allies for a stronger national government, a loose construction of the Constitution, and a mercantile (rather than agricultural) economy. As time progressed, the factions which adhered to these policies organized themselves into the nation's first political party, the Federalist Party, and the movement's focus and fortunes began to track those of the party it spawned. 
While the Federalist movement of the 1780s and the Federalist Party were distinct entities, they were related in more than just a common name. The Jeffersonian or Democratic-Republican Party, the opposition to the Federalist Party, emphasized the fear that a strong national government was a threat to the liberties of the people. They stressed that the national debt created by the new government would bankrupt the country, and that federal bondholders were paid through taxes collected from honest farmers and workingmen. These themes resonated with the Anti-Federalists, the opposition to the Federalist movement of the 1780s. As Norman Risjord has documented for Virginia, of the supporters of the Constitution in 1788, 69% joined the Federalist party, while nearly all (94%) of the opponents joined the Republicans. Of Thomas Jefferson's supporters in Virginia, 71% were former Anti-Federalists who continued to fear centralized government, while only 29% had been proponents of the Constitution a few years before. However, James Madison, who was one of the strongest proponents of the Constitution and a member of the first federalist movement, became a Jeffersonian, while some who were Anti-Federalists prior to the ratification of the Constitution, such as Patrick Henry, became supportive of the Federalist Party. The movement reached its zenith with the election of John Adams, an overtly Federalist President. However, with the defeat of Adams in the election of 1800 and the death of Hamilton, the Federalist Party began a long decline from which it never recovered. What finally finished off the Federalist Party was the Hartford Convention of 1814, in which five New England states gathered to discuss several constitutional amendments necessary to protect New England's interests in regard to the blockade of their ports by the British during the War of 1812. The threat of secession was also raised during these secret meetings. Three delegates were sent to Washington, DC to negotiate New England's terms, only to discover that the Treaty of Ghent had been signed, ending the war with the British. Across the nation, Republicans used the great victory at New Orleans to ridicule the Federalists as cowards or defeatists. The Federalists were thereafter associated with the disloyalty and parochialism of the Hartford Convention and destroyed as a political force. Under the Marshall Court The United States Supreme Court under Chief Justice John Marshall played an important role in defining the power of the federal and state governments during the early 19th century. As the U.S. Constitution does not specifically define many dividing lines between the layers of government, the Supreme Court was left to settle these questions. It did so particularly in McCulloch v. Maryland, in which the Court unanimously found that the states could not tax a legitimate and appropriate federal institution; Gibbons v. Ogden, in which control of interstate commerce under the Commerce Clause was confirmed to belong to Congress rather than the states; and Marbury v. Madison, which broadly expanded the power of the national government. A notable instance in which the Marshall Court empowered the states under federalism was Barron v. Baltimore, in which Marshall's court unanimously concluded that the Fifth Amendment applied only to the federal government and not to the states. 
Dual federalism Despite Chief Justice Marshall's strong push for the federal government, the court of his successor, Roger B. Taney (1835–1864), decided cases that favored equally strong national and state governments. The basic philosophy during this time was that the U.S. Government ought to be limited to its enumerated powers and that all others belonged to the states. Any powers that were not granted to the U.S. Government by the Constitution were reserved to the states through the Tenth Amendment. Dual federalism had a significant impact on social issues in the United States. Dred Scott v. Sandford was an example of how Taney's dual federalism helped stir up tensions eventually leading to the outbreak of the Civil War. Another example of dual federalism's social impact was the Plessy v. Ferguson ruling. Under dual federalism, the U.S. Government could not legislate on moral issues; these were matters to be decided by the states, and thus "separate but equal" could exist. Lastly, near the end of dual federalism's lifespan, both the Sixteenth and the Seventeenth Amendments bolstered the power of the national government, and divided state and federal power (Fuad Nor, 1977). Between dual federalism and the New Deal The ratification of the Fourteenth Amendment in 1868 marked a significant transfer of authority from state governments to the federal government, declaring United States citizenship paramount to state citizenship. Over time, the application of the Fourteenth Amendment and incorporation of the Bill of Rights to the states strengthened the federal government's power to protect against state intrusions upon individual rights. Through its Privileges or Immunities Clause, the Fourteenth Amendment shielded the fundamental rights of individual citizens against encroachment by the states. Still, in the immediate aftermath of the Taney court and the rise of dual federalism, the division of labor between federal, state, and local governments was relatively unchanged for over a century. Political scientist Theodore J. Lowi summarized the system in place during those years in The End of the Republican Era. This lack of change is nowhere more apparent than in Supreme Court rulings that addressed federalism against the backdrop of the laissez-faire, pro-business Gilded Age. In United States v. E.C. Knight Co. (1895), the Supreme Court continued along the path of promoting dual federalism in striking down a provision of the Sherman Antitrust Act. In an 8–1 decision, the Court ruled that Congress lacked the authority under the Commerce Clause to regulate monopolies by adopting a limited interpretation of interstate commerce, a win for states' rights. In 1918, a 5–4 majority ruled similarly in Hammer v. Dagenhart, a challenge against the constitutionality of the federal child labor act of 1916. However, by 1941, this ruling was reversed in United States v. Darby Lumber Company. The Court delivered another victory for dual federalism in Coyle v. Smith (1911), in which it upheld Oklahoma's relocation of its capital to Oklahoma City. The state had agreed to keep the capital in Guthrie until at least 1913 as part of the terms of its Enabling Act of 1906, which outlined the conditions for Oklahoma's acceptance into the Union as a state, but the Court held that such a condition could not bind the state once it had been admitted on an equal footing with the others. These cases illustrate the Supreme Court's consistent willingness to rule in favor of states' rights until National Labor Relations Board v. 
Jones & Laughlin Steel Corporation (1937), which ushered in a new era of cooperative federalism for the courts. Despite the Supreme Court's persistence in guarding states' rights, much of the modern federal apparatus owes its origins to changes that occurred during the period between 1861 and 1933. While banks had long been incorporated and regulated by the states, the National Bank Acts of 1863 and 1864 saw Congress establish a network of national banks that had their reserve requirements set by officials in Washington. During World War I, a system of federal banks devoted to aiding farmers was established, and a network of federal banks designed to promote homeownership came into existence in the last year of Herbert Hoover's administration. Congress used its power over interstate commerce to regulate the rates of interstate (and eventually intrastate) railroads and even regulated their stock issues and labor relations, going so far as to enact a law regulating pay rates for railroad workers on the eve of World War I. During the 1920s, Congress enacted laws bestowing collective bargaining rights on employees of interstate railroads. Congress also used the commerce power to enact morals legislation, such as the Mann Act of 1910 barring the transfer of women across state lines for immoral purposes, even as the commerce power remained limited to interstate transportation; it did not extend to what were viewed as intrastate activities such as manufacturing and mining. As early as 1913, there was talk of regulating stock exchanges, and the Capital Issues Committee formed to control access to credit during World War I recommended federal regulation of all stock issues and exchanges shortly before it ceased operating in 1921. With the Morrill Land-Grant Acts, Congress used land sale revenues to make grants to the states for colleges during the Civil War on the theory that land sale revenues could be devoted to subjects beyond those listed in Article I, Section 8 of the Constitution. On several occasions during the 1880s, one house of Congress or the other passed bills providing land sale revenues to the states for the purpose of aiding primary schools. During the first years of the twentieth century, the endeavors funded with federal grants multiplied, and Congress began using general revenues to fund them, thus utilizing the general welfare clause's broad spending power, even though it had been discredited for almost a century (Hamilton's view that a broad spending power could be derived from the clause had been all but abandoned by 1840). During Herbert Hoover's administration, grants went to the states for the purpose of funding poor relief. The 1920s saw Washington expand its role in domestic law enforcement. Disaster relief for areas affected by floods or crop failures dated from 1874, and these appropriations began to multiply during the administration of Woodrow Wilson (1913–21). By 1933, the precedents necessary for the federal government to exercise broad regulatory power over all economic activity and spend for any purpose it saw fit were almost all in place. Virtually all that remained was for the will to be mustered in Congress and for the Supreme Court to acquiesce. Cooperative federalism Cooperative Federalism involves a looser interpretation of the Tenth Amendment. More specifically, it supports the idea that the Tenth Amendment does not provide any additional powers to the states. 
It operates under the assumption that the federal and state governments are "partners," with the federal government creating laws for the states to carry out. It relies on the Supremacy Clause and the Necessary and Proper Clause as constitutional bases for its argument. Court cases such as United States v. Darby Lumber Co. and Garcia v. San Antonio Metropolitan Transit Authority expanded the role of Cooperative Federalism by requiring states to comply with federal labor laws. Although Cooperative Federalism has roots in the Civil War, the Great Depression marked an abrupt end to Dual Federalism and a dramatic shift to a strong national government. President Franklin D. Roosevelt's New Deal policies reached into the lives of U.S. citizens like no other federal measure had. As the Supreme Court had rejected nearly all of Roosevelt's economic proposals, the president proposed the Judicial Procedures Reform Bill of 1937 to add more justices to the Court. Although the expansion of the Court never materialized, the threat of it, along with a Democrat-controlled Congress, helped tilt Court rulings in favor of Roosevelt's policies. Lowi notes three Supreme Court cases that validated the shift in power. The national government was forced to cooperate with all levels of government to implement the New Deal policies; local government earned an equal standing with the other layers, as the federal government relied on political machines at a city level to bypass state legislatures. The formerly distinct division of responsibilities between state and national government had been described as a "layer cake," but, with the lines of duty blurred, cooperative federalism was likened to a "marble cake" or a "picket fence." In cooperative federalism, federal funds are distributed through grants in aid or categorical grants, which give the federal government more control over the use of the money. New Federalism Another movement calling itself "New Federalism" appeared in the late 20th century and early 21st century. Many of the ideas of New Federalism originated with Richard Nixon. New Federalism, which is characterized by a gradual return of power to the states, was initiated by President Ronald Reagan (1981–89) with his "devolution revolution" in the early 1980s and lasted until 2001. Previously, the federal government had granted money to the states categorically, limiting the states to use this funding for specific programs. Reagan's administration, however, introduced a practice of giving block grants, freeing state governments to spend the money at their own discretion. An example, and the first case of this kind, was Garcia v. San Antonio Metropolitan Transit Authority (SAMTA) (1985). Garcia, a SAMTA employee, argued that because SAMTA received federal money, it had to abide by federal labor regulations. SAMTA argued that it did not, because the money it received was to be used at its own discretion and because it was locally operated and made its own decisions about the transit system. This line of argument gave more autonomy and power to the states by allowing them greater discretion rather than having to abide by federal regulations. Under New Federalism, the question is whether the federal government may constitutionally command the states to carry out federal policy. For this, the courts use the anti-commandeering principle. "The anti-commandeering doctrine says that the federal government cannot require states or state officials to adopt or enforce federal law." This principle was established in New York v. United States (1992). 
In this case, New York sued the federal government, questioning the authority of Congress to regulate waste management. The Court ruled that the law violated the Tenth Amendment because Congress had commandeered the state of New York into enforcing federal regulations, even though states already take legal ownership of and liability for waste treatment. Establishing this principle gave states more autonomy on issues that fall under their discretion. A modern-day application of this rule can be found in Murphy v. National Collegiate Athletic Association (2018). New Jersey's governor challenged the federal government's prohibition on sports gambling. The Court again applied the anti-commandeering principle, allowing states to regulate sports gambling at their discretion. This is becoming a trend: states are now passing laws on issues that are often federally prohibited or heavily regulated by Congress under the Commerce Clause, as in the areas of medical marijuana (Gonzales v. Raich), partial-birth abortion (Gonzales v. Carhart), gun possession (United States v. Lopez), federal police powers (United States v. Morrison, which struck down portions of the Violence Against Women Act), or agriculture (Wickard v. Filburn). The balance between state and federal power has fluctuated in the 21st century. In a 2009 Rockefeller Institute report, Martha Derthick argues that "the normal tendency of federal-state relations in the United States is toward centralization." About the Bush administration (2001–2009), Derthick stated that "conventional federalism has survived the test of an aggressive presidency" in regard to military and emergency action, and further that the Bush administration was "in retrospect, more centralizing than militarizing." In a 2007 paper in Publius: The Journal of Federalism, Sidney Milkis and Jesse Rhodes argue that "The Republican Party has traditionally stood for 'limited government', but Bush's principal legacy for federalism is centralization of power in the federal government and the executive branch." Writing on federalism in the Obama administration, Thomas L. Gais noted that the "effort to impose central control is nothing new: GWB Administration did much the same." The federal government increased its powers under the presidency of Barack Obama (2009–2017), and to an extent, the powers of the state governments also grew. In 2011, scholar Gillian Metzger observed that "national developments entail some preemption and new state burdens. But each also has brought with it significant regulatory and financial opportunities for the states." Metzger points out that the states had increased regulatory responsibilities under Dodd-Frank, increased responsibilities in implementing and operating federal health care legislation under the Affordable Care Act, and received additional stimulus funding. Obama took office following the 2008 financial crisis, which required him to take action to stabilize the economy. In 2009, he introduced the American Recovery and Reinvestment Act (ARRA). The act placed a federal focus on stabilizing state and local budgets, providing financial bailouts, and ensuring that jobs were secure. ARRA was seen as a significant exertion of federal power, which many conservatives criticized; however, it was carried out through a coalition that included state governments as very active participants who worked closely in drafting and implementation. According to a 2010 article by Thomas L. 
Gais of the Rockefeller Institute, the Obama administration had been engaged with states more heavily than any administration since the 1960s, was more reliant than ever on state action, and states had the highest proportion of government employees compared to the federal government in history up to that point. Gais labelled this "assertive federalism". The cannabis policy of the Barack Obama administration was an easing of federal enforcement, granting more rights to the states in determining the legality of marijuana. Federalism under Donald Trump (2017–2021) was more complicated. In 2020, during the coronavirus pandemic, the presidency delayed action and federal agencies faced interference from the presidency, despite the federal government traditionally dealing with matters of national importance, including natural disasters or virus outbreaks. This would suggest that Trump attempted to weaken the role of the federal government, although he also attempted to override state powers or exercise powers that the Constitution did not grant the presidency. Punitive federalism, or the punishment of states and local areas by the federal government, became an issue during the Trump administration. Goelzhauser and Konisky state that punitive federalism is exemplified most by the Trump administration's interference with California through the EPA in 2018, and the withholding of disaster relief from Puerto Rico. They further state that "the pandemic has brought on, in addition to immense human suffering, the federalism event of the century". Another issue was Trump's response to the Black Lives Matter protests, in which he took a more confrontational stance, including deploying federal troops and agents to protests, despite several states opposing this measure and the action being condemned for possible unconstitutionality. According to Thompson, Wong, and Rabe, "Trump [was] particularly aggressive in the use of executive power, or the 'administrative presidency', to pursue his goals, including executive orders and regulatory changes." However, "the forces of federalism, especially state attorneys general, governors, and legislatures, have often undercut Trump's executive initiatives and reduced their impact". The federalism of the Biden administration is an emerging discussion. One federalism topic includes the measures available to the federal government in combatting the COVID-19 pandemic, and the promotion of public health. See also Notes References and further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/French_campaign_in_Egypt_and_Syria] | [TOKENS: 12053] |
Contents French invasion of Egypt and Syria The French invasion of Egypt and Syria (1798–1801) was a military expedition to Ottoman-held Egypt and Syria led by Napoleon Bonaparte during the French Revolutionary Wars. The campaign aimed to undermine British trade routes, expand French influence, and establish a scientific and administrative presence in Egypt. Napoleon also sought to sever Britain's connection to its colonial holdings in India, with the long-term ambition of challenging British dominance in the region. Departing from Toulon in May 1798, Napoleon’s fleet, comprising around 36,000 troops, landed at Alexandria on 1 July. Advancing rapidly, he defeated the ruling Mamluks at the Battle of the Pyramids, securing control of Cairo and establishing a French administration. The campaign, however, was soon compromised by the destruction of the French fleet at Aboukir Bay by Horatio Nelson, which cut off French reinforcements and supplies. French rule faced resistance, including the Cairo uprising (1798), which was suppressed with significant casualties. Seeking to consolidate French gains, Napoleon advanced into Ottoman Syria, aiming to preempt an Ottoman counteroffensive, but his campaign ended in failure at the Siege of Acre (1799), where Anglo-Ottoman forces, supported by the Royal Navy, repelled French assaults. Recognising the strategic situation and the political opportunities at home, Napoleon left Egypt in August 1799 and returned to France, where he seized political power. The French army, left under Jean-Baptiste Kléber, continued to resist, but following his assassination, Jacques-François Menou assumed command and struggled to maintain control. The French were ultimately defeated by British-Ottoman forces and surrendered in 1801. The campaign had significant military, political, and intellectual consequences. Napoleon’s presence in Egypt introduced European-style governance, but it also reinforced resistance among local populations. The scientific expedition accompanying the invasion produced the Description de l'Égypte, a seminal work that laid the foundation for modern Egyptology. The discovery of the Rosetta Stone allowed for the deciphering of Egyptian hieroglyphs. The campaign also contributed to the rise of Muhammad Ali of Egypt, who later established modern Egypt. Background The Peace of Campo Formio concluded the War of the First Coalition in France's favour. This left Great Britain as the only major European power still at war with the French Republic. Following the naval defeats of France's Spanish and Dutch allies at Cape St Vincent and Camperdown, a direct invasion of Britain became impossible, prompting the French government to explore alternative strategies to weaken British influence. The possibility of establishing French control over Egypt had been considered since François Baron de Tott conducted a secret reconnaissance mission to the Levant in 1777 to determine its feasibility. His report was favourable, but no immediate action followed. Nevertheless, Egypt became a topic of debate between Talleyrand and Napoleon Bonaparte, which continued in their correspondence during Napoleon's Italian campaign. Their correspondence reflected a growing interest in Egypt's potential as a colonial and strategic asset, particularly as a means to challenge British dominance in India. At the time, Egypt had been under Ottoman sovereignty since the early 16th century. French merchants had been strongly represented there since the 15th century, and France maintained good relations with the Ottomans. 
However, Ismail Bey, the Mamluk Shaykh al-balad, who ruled Egypt on behalf of the Ottomans and was well-disposed towards the French, died in 1791 during an epidemic in Cairo. His rivals, Ibrahim Bey and Murad Bey, took power and moved against the French in the country. Exposed to more and more repression, the French merchants asked for intervention. France therefore had two formal reasons to intervene: Firstly, the Kingdom of France had been an ally of the Ottoman Sultan since 1536 and could claim to want to restore his authority. Secondly, since the French Revolution, France could argue that it also wanted to bring the Egyptians freedom from the yoke of feudal Mamluk rule. The decision of 1798 was a complex mixture of geostrategic, economic, political and personal interests, dressed up with the ideals of the French Revolution. Napoleon regarded the capture of Egypt as the most important step to neutralise the massive economic advantages that Great Britain derived from trade with India and to force Great Britain to make concessions. In August 1797, he wrote in a letter to the Directory: Les temps ne sont pas éloignés, où nous sentirons que, pour détruire véritablement l'Angleterre, il faut nous emparer de l'Égypte. The time is not far off when we will feel that to really destroy England we must take Egypt. — Napoleon Bonaparte The capture of Egypt would have given the French control of the eastern Mediterranean and the Red Sea, which would have led to considerable losses for the British economy. Furthermore, a successful invasion of Egypt could have been followed by a direct attack on British territory in India. After France and Spain allied with each other in 1796, the Royal Navy had been forced to withdraw from the Mediterranean. Napoleon further wished to strengthen French trade interests over those of Great Britain in the Middle East, hoping to join forces with France's ally Tipu Sultan, ruler of Mysore in India and an opponent of British control in that country. For reasons of secrecy, the president of the Directory wrote the order to Napoleon himself. It was stated that the expedition was to consist of 36,000 men from the old Italian army, officers and generals of his choice, and various scientists and craftsmen. The Treasury was instructed to send Napoleon 1.5 million francs every décade (the ten-day week of the revolutionary calendar). In addition, he was authorised to take 3 of the 8 million francs from the 'Bernese treasury', which the defeated Swiss Confederation had been made to pay to France for its military intervention to establish the Helvetic Republic. In March 1798, the Directory took the official decision to launch the expedition to Egypt and appointed Bonaparte commander-in-chief of the Armée d'Orient. Bonaparte was tasked with first occupying Malta and then moving on to the conquest of Egypt. Once the occupation of Egypt was complete, he was to establish communications with India and secure the Red Sea, which in turn would facilitate the expulsion of the British from the Orient and a future French expedition to India. Preparations Preparations for the expedition were spread across Toulon, Marseille, Genoa, Corsica and Civitavecchia and were essentially organised by Napoleon's chief of staff Louis Berthier. Around 300 ships were requisitioned for the transport. The escort was provided by 13 ships of the line and the same number of frigates under François-Paul Brueys d'Aigalliers. 
By 11 May, the Orient army had grown to 30,800 infantrymen, 3,475 cavalrymen, 1,660 artillerymen, 60 field guns and 40 siege guns Including all civilians (artists and researchers), the total number was 38,000. To prevent news of the impending attack on Egypt from spreading before the fleet arrived, all merchant ships that sighted the convoy during the crossing were to be seized and detained until the French had reached Alexandria. Expedition On 19 May, Napoleon gave the order on board the flagship L'Orient to set sail from Toulon with his invasion fleet. The fleet sailed along the coast of Provence towards Genoa and from there southwards to Corsica. Until 30 May, the fleet remained within sight of the east coast, crossed the Strait of Bonifacio and then followed the coast of Sardinia with the intention of joining up with the ships coming from Civitavecchia. On 3 June, Napoleon received news of the British presence in Sardinia, whereupon he sent a squadron to reconnoitre the situation. However, after the British were not encountered, Napoleon gave the order to stop waiting for the ships from Civitavecchia and had his fleet turn south-east, passing Mazara del Vallo and Pantelleria on 7 June. There Napoleon learned that he was being pursued by the British, whereupon he set course for Malta, which he reached on 9 June and joined up with the 56 ships from Civitavecchia. The French expeditionary force was thus complete and set course for Sicily. It rounded the southern tip of Sardinia as early as 5 June. When Napoleon's fleet arrived off Malta, Napoleon demanded that the Knights of Malta allow his fleet to enter the port and take on water and supplies. Grand Master von Hompesch replied that only two foreign ships would be allowed to enter the port at a time. Under that restriction, re-victualling the French fleet would take weeks, and it would be vulnerable to the British fleet of Admiral Nelson. Napoleon therefore ordered the invasion of Malta. The French Revolution had significantly reduced the Knights' income and their ability to put up serious resistance. Half of the Knights were French, and most of these knights refused to fight. Thus Malta was conquered without much resistance. Napoleon departed Malta for Egypt. After successfully eluding detection by the Royal Navy for thirteen days, the fleet was in sight of Alexandria where it landed on 1 July, although Napoleon's plan had been to land elsewhere. On the day of the landing, Napoleon told his troops "I promise to each soldier who returns from this expedition, enough to purchase six arpents of land." (approximately 7.6 acres or 3.1 ha) and added: The peoples we will be living alongside are Muslims; their first article of faith is "There is no other god but God, and Mahomet is his prophet". Do not contradict them; treat them as you treated the Jews, the Italians; respect their muftis and their imams, as you respected their rabbis and bishops. Have the same tolerance for the ceremonies prescribed by the Quran, for their mosques, as you had for the convents, for the synagogues, for the religion of Moses and that of Jesus Christ. The Roman legions used to protect all religions. You will here find different customs to those of Europe, you must get accustomed to them. The people among whom we are going treat women differently to us; but in every country whoever violates one is a monster. 
Pillaging only enriches a small number of men; it dishonours us, it destroys our resources; it makes enemies of the people who it is in our interest to have as our friends. The first city we will encounter was built by Alexander [the Great]. We shall find at every step great remains worthy of exciting French emulation." Despite the idealistic promises proclaimed by Napoleon, Egyptian intellectuals like 'Abd al-Rahman al-Jabarti (1753–1825 C.E/ 1166–1240 A.H) were heavily critical of Napoleon's objectives. As a major chronicler of the French invasion, Jabarti decried the French invasion of Egypt as the start of "fierce fights and important incidents; of the momentous mishaps and appalling afflictions, of the multiplication of malice and the acceleration of affairs; of successive sufferings and turning times; of the inversion of the innate and the elimination of the established; of horrors upon horrors and contradicting conditions; of the perversion of all precepts and the onset of annihilation; of the dominance of destruction and the occurrence of occasions" Menou had been the first to set out for Egypt, and was the first Frenchman to land. Bonaparte and Kléber landed together and joined Menou at night at the cove of Marabout (Citadel of Qaitbay), on which the first French tricolour to be hoisted in Egypt was raised. On the night of the 1st of July, Bonaparte who was informed that Alexandria intended to resist him, rushed to get a force ashore without waiting for the artillery or the cavalry to land, in which he marched on Alexandria at the head of 4,000 to 5,000 men. At 2 am, 2 July, he set off marching in three columns, on the left, Menou attacked the "triangular fort", where he received seven wounds, while Kléber was in the centre, in which he received a bullet in the forehead but was only wounded, and Louis André Bon on the right attacked the city gates. Alexandria was defended by Koraim Pasha and 500 men. However, after a rather lively shooting in the city, the defenders gave up and fled. Once all the troops were ashore by 3 July, Napoleon made arrangements to leave the delta and capture Cairo, the capital of Egypt. A flotilla, loaded with provisions, cannons, ammunition and equipment, was to sail along the coast to the mouth of the Rosetta, head for the Nile and follow the army upstream from Rahmaniyyah. In order to reach Cairo before the annual flooding of the Nile, Napoleon decided to march his troops the 72 kilometres to Rahmaniyyah through the desert. When the French set off for Cairo on 6 July, the soldiers were still wearing thick woollen uniforms and their knapsacks were packed full of equipment, with the exception of water bottles. Many suffered from dysentery or eye inflammation, others were so desperate that they committed suicide. The villages marked on the maps turned out to be mostly deserted and the wells had been filled in by hostile Bedouins. On 20 July, the French army had advanced as far as Umm Dinar, 29 km north of Cairo. Observers reported that an Egyptian force under Murad Bey had gathered on the west bank of the Nile at Imbāba. Other Egyptian troops under Ibrahim Bey were on the east bank of the Nile. After Napoleon had reached the battlefield, the 6,000-strong Mamluk cavalry attacked the French at around 3.30 pm. Formed into squares, the French were able to fend off the cavalry attacks and finally counter-attack and put the Mamluks to flight. 
Murad withdrew with the remnants of his troops to Upper Egypt and Ibrahim, in the direction of Belbeys, in order to retreat to Syria. The battle cost the French barely a hundred dead and wounded, while the Mamluks suffered around 1,500 dead and wounded. In two proclamations to the Egyptians and the inhabitants of Cairo, Napoleon declared that the aim of the French invasion was to liberate the country from the slavery and exploitation of the Mamluk 'clan' (race) and their autocratic beys. The inhabitants, their families, their houses and property would be protected. Their way of life and religion would be respected, and dīwāne would be established for self-government, staffed by local dignitaries Dupuy's brigade pursued the routed enemy and at night entered Cairo, which had been abandoned by the beys Mourad and Ibrahim. On 4 Thermidor (22 July), the notables of Cairo came to Giza to meet Bonaparte and offered to hand over the city to him. Three days later, he moved his main headquarters there. Desaix was ordered to follow Mourad, who had set off for Upper Egypt. An observation corps was put in place at Elkanka to keep an eye on the movements of Ibrahim, who was heading towards Syria. Bonaparte personally led the pursuit of Ibrahim, beat him at Salahie and pushed him completely out of Egypt. On 1 August, the British Mediterranean fleet under Horatio Nelson discovered the French fleet under François-Paul Brueys d'Aigalliers anchored in the shallows of the Bay of Abukir near Alexandria. The French were initially unperturbed, as they assumed that the British would not begin their attack until the following morning. However, the British were determined to begin the attack that very night. The French had made a mistake and left a gap in their defence. The British ships were able to penetrate this gap and fire on the French ships from two sides. At around 10 pm, the French flagship L'Orient exploded. The battle continued into the night and only two of Brueys' ships of the line and two French frigates escaped destruction or capture by the British. News of the naval defeat reached Bonaparte en route back to Cairo from defeating Ibrahim but, far from being worried, Mullié states: This disastrous event did not disconcert Bonaparte at all – ever impenetrable, he did not allow any emotion to appear that he had not tested in his mind. Having calmly read the despatch which informed him that he and his army were now prisoners in Egypt, he said "We no longer have a navy. Well! We'll have to stay here, or leave as great men just as the ancients did". The army then showed itself happy at this short energetic response, but the native Egyptians considered the defeat at Aboukir as fortune turning in their favour and so from then on busied themselves to find means to throw off the hateful yoke the foreigners were trying to impose on them by force and to hunt them from their country. This project was soon put into execution. After the Battle of Pyramids, Napoleon instituted a French administration in Cairo and suppressed the subsequent rebellions violently. Although Napoleon tried to co-opt local Egyptian ulema, scholars like Al-Jabarti poured scorn on the ideas and cultural ways of the French.Despite their cordial proclamations to the natives, with some French soldiers even converting to Islam, clerics like Abdullah al-Sharqawi condemned the French as: materialist, libertine philosophers ... they deny the Resurrection, and the afterlife, and ... 
[the] prophets After the naval defeat at Aboukir, Bonaparte's campaign remained land-bound. His army still succeeded in consolidating power in Egypt, although it faced repeated nationalist uprisings, and Napoleon began to behave as absolute ruler of all Egypt. He set up a pavilion and from within it presided over a fête du Nil—it was he who gave the signal to throw into the floats the statue of the river's fiancée, his name and Mohammed's were mingled in the same acclamations, on his orders gifts were distributed to the people, and he gave kaftans to his main officers. In a largely unsuccessful effort to gain the support of the Egyptian population, Bonaparte issued proclamations that cast him as a liberator of the people from Ottoman and Mamluk oppression, praising the precepts of Islam and claiming friendship between France and the Ottoman Empire despite French intervention in the breakaway state. This position as a liberator initially gained him solid support in Egypt and later led to admiration for Napoleon from the Albanian Muhammad Ali of Egypt, who succeeded where Bonaparte had not in reforming Egypt and declaring its independence from the Ottomans. In a letter to a sheikh in August, Napoleon wrote, "I hope... I shall be able to unite all the wise and educated men of all the countries and establish a uniform regime based on the principles of the Quran which alone are true and which alone can lead men to happiness." Shortly after Bonaparte's return from facing Ibrahim came Mohammed's birthday, which was celebrated with great pomp. Bonaparte himself directed the military parades for the occasion, preparing for this festival in the sheik's house wearing oriental dress and a turban. It was on this occasion that the divan granted him the title Ali-Bonaparte after Bonaparte proclaimed himself "a worthy son of the Prophet" and "favourite of Allah". Around the same time he took severe measures to protect pilgrim caravans from Egypt to Mecca, writing a letter himself to the governor of Mecca. Even so, thanks to the taxes he imposed on them to support his army, the Egyptians remained unconvinced of the sincerity of all Bonaparte's attempts at conciliation and continued to attack him ceaselessly. Any means, even sudden attacks and assassination, were allowed to force the "infidels" out of Egypt. Military executions were unable to deter these attacks and they continued. 22 September was the anniversary of the founding of the First French Republic and Bonaparte organised the most magnificent celebration possible. On his orders, an immense circus was built in the largest square in Cairo, with 105 columns (each with a flag bearing the name of a département) round the edge and a colossal inscribed obelisk at the centre. On seven classical altars were inscribed the names of heroes killed in the French Revolutionary Wars. Two triumphal arches were built to commemorate the campaign: a wooden arc de triomphe in Azbakiyya Square, and a second arch which was inscribed with the words "There is no god but God, and Muhammad is his prophet" and decorated by the Genoese artist Michel Rigo with scenes from the Battle of the Pyramids. Here there was some awkwardness – the painting flattered the French but aggrieved the defeated Egyptians they were trying to win over as allies. 
On the day of the festival, Bonaparte addressed his troops, enumerating their exploits since the 1793 siege of Toulon and telling them: From the English, famous for arts and commerce, to the hideous and fierce Bedouin, you have caught the gaze of the world. Soldiers, your destiny is fair... This day, 40 million citizens celebrate the era of representative government, 40 million citizens think of you. The speech was followed by cries of "Vive la République!" and a cannon volley. Later, Bonaparte held a feast for two hundred people in a garden in Cairo and sent soldiers to plant a French flag on the top of a pyramid. After his defeat at the Pyramids, Mourad Bey retreated to Upper Egypt. On 25 August 1798, General Desaix embarked at the head of his division on a flotilla and sailed up the Nile. On 31 August, Desaix arrived at Beni Suef, where he began to encounter supply problems; he then went up the Nile to Behneseh and progressed towards Minya. The Mamluks did not fight, and the flotilla returned on 12 September to the entrance of the Bahr Yussef. Desaix learned that the Mamluks were in the plain of Faiyum by 24 September. The first contact between the two sides occurred on 3 October, and a second minor fight followed; these engagements began to deplete the food and ammunition of the French forces. On 7 October, Mourad Bey's troops came out of their entrenchments at Sédiman and attacked the French, who formed themselves into three squares, one large and two small ones at its angles. The Mamluks, as in previous encounters, attacked furiously but were repulsed. They attempted to use their four cannons, but a vigorous attack led by Captain Jean Rapp managed to capture them. In 1798, Napoleon led the French army into Egypt, swiftly conquering Alexandria and Cairo. However, in October of that year, discontent against the French led to an uprising by the people of Cairo. While Bonaparte was in Old Cairo, the city's population began distributing weapons to one another and fortifying strongpoints, especially at the Al-Azhar Mosque. A French commander, Dominique Dupuy, was killed by the revolting Cairenes, as was Bonaparte's aide-de-camp, Joseph Sulkowski. Incited by the sheikhs and imams, the local citizens swore by the Prophet to exterminate any Frenchman they met, and all Frenchmen they encountered – at home or in the streets – were mercilessly slaughtered. Crowds rallied at the city gates to keep out Bonaparte, who was repulsed and forced to take a detour to get in via the Boulaq gate. The French army's situation was critical – the British were threatening French control of Egypt after their victory at the Battle of the Nile, Murad Bey and his army were still in the field in Upper Egypt, and the generals Menou and Dugua were only just able to maintain control of Lower Egypt. The Ottoman peasants had common cause with those rising against the French in Cairo – the whole region was in revolt. The French responded by setting up cannons in the Citadel and firing them at areas containing rebel forces. During the night, French soldiers advanced around Cairo and destroyed any barricades and fortifications they came across. The rebels soon began to be pushed back by the strength of the French forces, gradually losing control of their areas of the city. Bonaparte personally hunted down rebels from street to street and forced them to seek refuge in the Al-Azhar Mosque. Bonaparte said that "He [i.e. God] is too late – you've begun, now I will finish!" He then immediately ordered his cannon to open fire on the Mosque. 
The French broke down the gates and stormed into the building, massacring the inhabitants. At the end of the revolt 5,000 to 6,000 Cairenes were dead or wounded. Syria With Egypt quiet again and under his control, Bonaparte used this time of rest to visit Suez and see with his own eyes the possibility of a canal (known as the Canal of the Pharaohs) said to have been cut in antiquity between the Red Sea and the Nile by order of the pharaohs. Before setting out on the expedition, he gave Cairo back its self-government as a token of its pardon – a new 'divan' made up of 60 members replaced the military commission. Then, accompanied by his colleagues from the Institut, Berthollet, Monge, Le Père, Dutertre, Costaz, Caffarelli, and followed by a 300-man escort, Bonaparte set out for the Red Sea and after three days' marching across the desert he and his caravan arrived at Suez. After giving orders to complete the fortifications at Suez, Bonaparte crossed the Red Sea and on 28 December moved into Sinai to look for the celebrated mountains of Moses 17 kilometres from Suez. On his return, surprised by the rising tide, he ran the risk of drowning. Arriving back at Suez, after much exploration the expedition fulfilled its aim, finding the remains of the ancient canal built by Senusret III and Necho II. In the meantime the Ottomans in Constantinople (modern-day Istanbul) received news of the French fleet's destruction at Aboukir and believed this spelled the end for Bonaparte and his expedition, trapped in Egypt. Sultan Selim III decided to wage war against France, and sent two armies to Egypt. The first army, under the command of Jezzar Pasha, had set out with 12,000 soldiers; but was reinforced with troops from Damascus, Aleppo, Iraq (10,000 men), and Jerusalem (8,000 men). The second army, under the command of Mustafa Pasha, began on Rhodes with about eight thousand soldiers. He also knew he would get about 42,000 soldiers from Albania, Constantinople, Asia Minor, and Greece. The Ottomans planned two offensives against Cairo: from Syria, across the desert of El Salheya-Bilbeis-Al Khankah, and from Rhodes by sea landing in the Aboukir area or the port city of Damietta. At the end of 1798, the most pressing problem was the rapid build-up of Ottoman troops, which the Sultan had planned for a massive attack on Egypt. One was the Rhodes army, which was transported by sea with the help of the Royal Navy. The other, the Damascus Army, advanced on Egypt via Palestine and the Sinai. While these moves were being prepared, Ahmed Pasha al-Jazzar was to advance from Acre on the Egyptian border and attract Napoleon's attention. In this precarious situation, Napoleon decided to pre-empt the attack, capture Acre, defeat the Ottomans in Syria and then return to Egypt to confront them. He prepared around 14,000 soldiers who were organised in divisions under the command of Generals Reynier Kléber, Bon, Lannes, a cavalry division under General Murat, a brigade of infantry and cavalry under Brigade chief Bessières, a camel company, artillery under Dommartin, and engineers and sappers under Caffarelli. The heavy siege artillery pieces were sent by the flotilla under contre-amiral Perrée to Jaffa. On 10 February 1799, Napoleon left Cairo for Syria. His first target was El-Arish, which finally surrendered on 19 February after an unexpected siege. Gaza fell without resistance on 25 February, and by 3 March the French had reached the outskirts of Jaffa. This city was surrounded by high walls flanked by towers. 
Jezzar Pasha had entrusted its defence to elite troops, with the artillery manned by 1,200 Ottoman gunners. The city was one of the ways into Syria; its port could be used by his fleet, and a large part of the expedition's success depended on its fall. This meant Bonaparte had to capture the city before advancing further, and so he laid siege to it. Following the successful assault on the 7th of March, the city surrendered. A subordinate French officer successfully persuaded 3,000 Turks in the citadel of Jaffa that they would be granted amnesty. However, Bonaparte ordered the execution of every man and of a further 1,400 prisoners. He later attempted to justify this action by claiming it was a military necessity, as he had no food for so many prisoners, could not spare them an escort, and had found paroled Turks from El-Arish serving in the garrison. However, none of these explanations have survived scrutiny. The initial successes ended before the city of Acre. With the support of the British, who reached Acre on 15 March, the city's defences were strengthened. In addition, the Royal Navy managed to intercept and capture the French flotilla transporting the siege ammunition and cannons. Without his siege guns, Napoleon had to resort to more time-consuming methods of besieging the city. Meanwhile, the Damascus Army was on its way to relieve Acre. Upon establishing contact with the opposing forces, the French demonstrated clear superiority in the initial stages of engagement. On 8 April, General Junot emerged victorious in a cavalry skirmish near Nazareth, despite being outnumbered. This was followed by a significant victory on 11 April, when Kléber, leading 1,500 men, routed a substantial force of 6,000 Turks in a battle at Cana. In another engagement, the dashing cavalry leader Joachim Murat successfully led his troops across the Jordan River to the north of Lake Tiberias, engaging and defeating 5,000 Turks. After sixty days' repeated attacks and two murderous and inconclusive assaults, the city remained uncaptured. Even so, it was still awaiting reinforcements by sea as well as a large army forming up in Asia on the sultan's orders to march against the French. To find out the latter's movements, Jezzar ordered a general sortie against Bonaparte's camp. This sortie was supported by its own artillery and a naval bombardment from the British. With his usual impetuosity, Bonaparte pushed Jezzar's columns back against their own walls and then went to help Kléber, who was entrenched in the ruins with 4,000 Frenchmen under his command against 20,000 Ottomans at Mount Tabor. Bonaparte conceived a trick which used all the advantages offered him by the enemy position, sending Murat and his cavalry across the River Jordan to defend the river crossing and Vial and Rampon to march on Nablus, while Bonaparte himself put his troops between the Ottomans and the magazines. These manoeuvres were successful, in what became known as the Battle of Mount Tabor. The enemy army, taken by surprise at many points at once, was routed and forced to retreat, leaving their camels, tents, provisions and 5,000 dead on the battlefield. Returning to besiege Acre, Bonaparte learned that Rear-Admiral Perrée had landed seven siege artillery pieces at Jaffa. Bonaparte then ordered two assaults, both vigorously repulsed. A fleet was sighted flying the Ottoman flag and Bonaparte realised he must capture the city before that fleet arrived with reinforcements. 
A fifth general attack was ordered, which took the outer works, planted the French tricolour on the rampart, pushed the Ottomans back into the city and forced the Ottoman fire to relent. Acre was thus taken or about to capitulate. One of those fighting on the Ottoman side was the French émigré and engineer officer Phélippeaux, one of Bonaparte's classmates at the École Militaire. Phélippeaux ordered cannon to be placed in the most advantageous positions and new trenches dug as if by magic behind the ruins which Bonaparte's forces had captured. At the same time Sidney Smith, commander of the British fleet, and his ships' crews landed. These factors renewed the courage of the besieged and they pushed Bonaparte's force back, with stubborn fury on both sides. Faced with heavy losses in the battles around Acre, an outbreak of plague among his soldiers and the hardships caused by the heat, Napoleon after 63 days of campaigning finally had to retreat to Egypt. The French force's situation was now critical – the enemy could harass its rear as it retreated, it was tired and hungry in the desert, and it was carrying a large number of plague-sufferers. To carry these sufferers in the middle of the army would spread the disease, so they had to be carried in the rear, where they were most at risk from the fury of the Ottomans, keen to avenge the massacres at Jaffa. There were two hospital depots, one in the large hospital on Mount Carmel and the other at Jaffa. On Bonaparte's orders, all those at Mount Carmel were evacuated to Jaffa and Tantura. The gun horses were abandoned before Acre and Bonaparte and all his officers handed their horses over to the transport officer Daure, with Bonaparte walking to set an example. To conceal its withdrawal from the siege, the army set off at night. Arriving at Jaffa, Bonaparte ordered three evacuations of the plague sufferers to three different points – one by sea to Damietta, one by land to Gaza and another by land to Arish. During the retreat the army picked clean all the lands through which they passed, with livestock, crops and houses all being destroyed. Gaza was the only place to be spared, in return for remaining loyal to Bonaparte. To speed the retreat, Napoleon suggested the controversial step of euthanizing his own soldiers who were terminally ill with plague (between 15 and 50, sources vary) and not expected to recover through an opium overdose, to relieve their suffering, ease the retreat, prevent the spread of the disease and prevent the torture and executions the soldiers left behind would have received if captured by the enemy; his doctors refused to carry out such orders but there is also evidence in the form of first-hand testimonies that claim the mass euthanasia did take place, and the matter remains one for debate. Upon his return to Cairo, June 14, 1799, Napoleon made plans for departure. In an effort to exert pressure on the Directory to recall him, he sent a dispatch to Paris on 29 June, acknowledging the loss of 5,344 men and requesting 6,000 reinforcements, despite being well aware that they would not be forthcoming. On 11 August, Napoleon received word of the crisis in Europe. France was facing a coalition of England, Austria, Russia, Turkey and Naples. An Anglo-Russian army had invaded Holland and an Austro-Russian army had gained control of Switzerland; a Turco-Russian fleet had captured Corfu; and another Austro-Russian army had advanced into northern Italy undoing all of Bonaparte's work in a matter of weeks. 
France was reported to be on the verge of economic collapse, and royalist sentiment was running high. Campaigns in Upper Egypt The French were determined to exterminate the Mamluks or to expel them from Egypt. By that time, the Mamluks had been driven out of Faiyum into Upper Egypt. General Desaix informed Bonaparte of his situation and soon received a reinforcement of 1,000 cavalry and three light artillery pieces, commanded by General Davout. On 29 December 1798, the French army arrived at Girga, capital of Upper Egypt, and waited there for a flotilla to bring it ammunition. However, twenty days passed without news of the flotilla. In the meantime, Mourad Bey had contacted chieftains from Jeddah and Yanbu, urging them to cross the Red Sea and exterminate the handful of infidels who had come to destroy the religion of Mohammed. He also sent emissaries to Nubia to bring reinforcements and called on Hassan Bey Jeddaoui to join him against the enemies of the Quran. Upon hearing of these preparations, General Davout mobilized his forces on 2–3 January 1799 and met a multitude of armed men near the village of Sawaqui. The insurgents were easily routed, and eight hundred of them were left on the battlefield. However, the locals kept gathering around Asyut to fight the French. On 8 January, Davout met another local force at Tahta, where he killed a thousand men and put the rest to flight. In the meantime, Mourad Bey's army was reinforced by a thousand sherifs arriving from beyond the Red Sea and two hundred and fifty Mamluks led by Hassan Bey Jeddaoui and Osman Bey Hassan, in addition to Nubians and North Africans led by Sheikh Al-Kilani; they encamped near the village of Houé, all supported by the inhabitants of Upper Egypt and of the Cataracts of the Nile. The combined Muslim army set out through the desert on 21 January 1799 and marched until it reached Samhud, near Qena. On 22 January, Desaix formed three squares, two of infantry and one of cavalry, the latter placed in the centre of the other two for protection. The French were scarcely drawn up in line when the enemy cavalry completely surrounded them, while a column of Arabs from Yanbu fired continuously on their left. Desaix instructed the riflemen of the 96th Infantry Regiment to attack them, while Rapp and Savary, at the head of a squadron of cavalry, charged the enemy in the flank. The Arabs were attacked so vigorously that they were forced to flee, leaving about thirty of their men, killed and wounded, in the square. The Arabs of Yanbu, having rallied, came back to attack and tried to capture the village of Samhud, but the riflemen of the 96th assaulted them fiercely and directed such sustained fire against them that they were obliged to withdraw after heavy losses. The numerous Muslim forces nevertheless kept advancing, uttering frightful cries, and the Mamluks swooped down on the squares commanded by Generals Friant and Belliard, but they were so strongly repulsed by artillery and musketry fire that they had to withdraw, leaving the battlefield strewn with their dead. Mourad Bey and Osman Bey Hassan, who commanded the Mamluk corps, could not stand against the charge of Davout's cavalry. They abandoned their positions and dragged the whole army into their flight. The French pursued their enemies until the next day and did not stop until they had pushed them beyond the Cataracts of the Nile. Desaix continued to march south, reaching Esneh on 9 February.
Meanwhile, Osman Bey Hassan had stationed his forces at the foot of a mountain near Aswan. On 12 February, General Davout discovered the enemy positions and immediately made his dispositions. He formed his cavalry in two lines and, in this order of battle, swooped down on the Mamluks. Osman Bey Hassan was dangerously wounded and had his horse killed under him. The French cavalry rushed upon the enemy with such impetuosity that the fight turned into a fury, but the Mamluks were defeated and forced to abandon the battlefield. By the end of February 1799, Sherif Hassan and 2,000 infantry had arrived from Mecca. When Desaix and his forces reached Asyut, his flotilla had been left behind near Qena. On 3 March, the Ottomans attacked the vessel L'Italie, commanded by Captain Morandi, which had 200 marines and 300 wounded and blind on board. Morandi tried to manoeuvre, but the vessel was boarded by hundreds of attackers, whereupon he ordered it to be set on fire. He was killed by a hail of hostile bullets, and all on board were eventually mutilated and killed. On 8 March 1799, General Belliard led his forces against 3,000 Meccan infantry and 350 Mamluks in the plain of Abnud, on the right bank of the Nile to the south of Qena. The French, in square formation, managed to advance on the Ottoman forces, who then barricaded themselves in the houses of Abnud. The fighting lasted for hours; eventually the French reached the courtyard of the village and set the houses on fire. The Ottomans were forced to flee, and the remaining wounded were all killed. The Mamluks kept to their strategy of inciting the locals against the French. On 1 May 1799, General Davout's forces killed at least 2,000 fellahin at Beni Adi near Asyut. While pursuing Murad Bey into Upper Egypt, the French discovered the monuments at Dendera, Thebes, Edfu and Philae. On 29 May 1799, General Belliard captured Kosseir on the Red Sea after a march through the desert, to halt the further arrival of Meccan troops and to forestall any possible English landing. Abukir to withdrawal At Cairo the army found the rest and supplies it needed to recover, but its stay there could not be a long one. Bonaparte had been informed that Murad Bey had evaded the pursuit by Generals Desaix, Belliard, Donzelot and Davout and was descending on Lower Egypt. Bonaparte thus marched to attack him at Giza, also learning that 100 Ottoman ships were off Aboukir, threatening Alexandria. Without losing time or returning to Cairo, Bonaparte ordered his generals to make all speed to meet the army commanded by the pasha of Rumelia, Saïd-Mustapha, which had joined up with the forces under Murad Bey and Ibrahim. Before leaving Giza, Bonaparte wrote to Cairo's divan, stating: Eighty ships have dared to attack Alexandria but, beaten back by the artillery in that place, they have gone to anchor in Aboukir Bay, where they began disembarking [troops]. I leave them to do this, since my intention is to attack them, to kill all those who do not wish to surrender, and to leave the others alive to be led in triumph to Cairo. This will be a handsome spectacle for the city. The Ottoman troops, led by Mustafa Pasha, entrenched themselves in a heavily fortified position near the coast at Abukir. On 25 July 1799, Napoleon attacked the Ottoman force of around 18,000 soldiers with around 10,000 French troops.
The French infantry fought their way through three successive lines of Turkish entrenchments, greatly assisted by the stupidity of the Janissaries, who repeatedly left their positions in search of French heads, but the coup de grâce was delivered by Murat at the head of his cavalry shortly after midday. The dashing Gascon found the knowledge of the ground he had acquired the previous July of the greatest assistance, and the Turks were swept back by the ferocity of his charge; after a fierce struggle, in which Murat personally engaged the enemy general and was wounded in the cheek, the Turkish headquarters was captured together with many senior enemy officers. The land battle at Abukir was Bonaparte's last action in Egypt, partly restoring his reputation after the French naval defeat at the same place a year earlier. During the prisoner exchange at Aboukir, and notably through the Gazette de Francfort that Sidney Smith had sent him, Bonaparte was in communication with the British fleet, from which he learned of events in France. As he saw it (and later mythologised it), France had been thrown onto the defensive, its enemies had recaptured its conquests, and the country was unhappy with its dictatorial government and nostalgic for the glorious peace it had signed in the Treaty of Campo Formio – which meant, to Bonaparte, that France needed him and would welcome him back. With the Egyptian campaign stagnating and political instability developing back home, a new phase in Bonaparte's career was beginning: he felt that he had nothing left to do in Egypt that was worthy of his ambition and that (as the defeat at Acre had shown) the forces left to him there were not sufficient for an expedition of any importance outside Egypt. Bonaparte therefore decided, on his own initiative, to return to France. He shared the secret of his return only with a small number of friends whose discretion and loyalty were well known. He left Cairo in August, on the pretext of a voyage in the Nile Delta, without arousing suspicion, accompanied by the scholars Monge and Berthollet, the painter Denon, the generals Berthier, Murat and Lannes, and a handful of other officers including Marmont, Andréossy and Bessières. On 23 August, a proclamation informed the army that Bonaparte had transferred his powers as commander-in-chief to General Kléber. Kléber read to his troops the concise communiqué Napoleon had left: 'Only extraordinary circumstances have persuaded me, for the benefit of my country and its reputation and in obedience, to pass through the enemy lines and return to Europe.' The troops Bonaparte left behind were supposed to be honourably evacuated under the terms of the Convention of El Arish, which Kléber had negotiated with Smith and the Ottoman commander Kör Yusuf in early 1800, but Britain refused to sign it and Kör Yusuf sent an army of 30,000 men against Kléber. Kléber defeated the Ottomans at the Battle of Heliopolis in March 1800 and then suppressed an insurrection in Cairo. On 14 June (26 Prairial), a Syrian student called Suleiman al-Halabi assassinated Kléber, stabbing him with a dagger in the heart, chest, left forearm and right thigh. Command of the French army passed to General Menou, who held it from 3 July until August 1801.
Menou's letter was published in Le Moniteur on 6 September, with the conclusions of the committee charged with judging those responsible for the assassination: The committee, after carrying through the trial with all due solemnity and process, thought it necessary to follow Egyptian customs in its application of punishment; it condemned the assassin to be impaled after having his right hand burned, and three of the guilty sheikhs to be beheaded and their bodies burned. The Anglo-Ottomans then commenced their land offensive: the French were defeated by the British at the Battle of Alexandria on 21 March 1801, the garrison of Fort Julien surrendered in April, and Cairo fell in June. Finally besieged in Alexandria from 17 August to 2 September, Menou eventually capitulated to the British. Under the terms of his capitulation, the British general John Hely-Hutchinson allowed the French army to be repatriated in British ships. Menou also signed over to Britain all the Egyptian antiquities the French had collected, such as the Rosetta Stone. After preliminary talks at Al Arish on 30 January 1802, the Treaty of Paris of 25 June ended all hostilities between France and the Ottoman Empire, returning Egypt to the Ottomans. Scientific expedition An unusual aspect of the Egyptian expedition was the inclusion of an enormous contingent of scientists and scholars ("savants") assigned to the invading French force, 167 in total. This deployment of intellectual resources is considered by some an indication of Napoleon's devotion to the principles of the Enlightenment, and by others a masterstroke of propaganda obfuscating the true motive of the invasion: the increase of Bonaparte's power. These scholars included engineers and artists, members of the Commission des Sciences et des Arts, the geologist Dolomieu, Henri-Joseph Redouté, the mathematician Gaspard Monge (a founding member of the École polytechnique), the chemist Claude Louis Berthollet, Vivant Denon, the mathematician Jean-Joseph Fourier (who did in Egypt some of the empirical work upon which his "analytical theory of heat" was founded), the physicist Étienne Malus, the naturalist Étienne Geoffroy Saint-Hilaire, the botanist Alire Raffeneau-Delile, and the engineer Nicolas-Jacques Conté of the Conservatoire national des arts et métiers. Their original aim was to help the army, notably by opening a Suez Canal, mapping out roads and building mills to supply food. They founded the Institut d'Égypte with the aim of propagating Enlightenment values in Egypt through interdisciplinary work, including improving the country's agricultural and architectural techniques. A scientific review was created under the title Décade égyptienne, and in the course of the expedition the scholars also observed and drew the flora and fauna of Egypt and became interested in the country's resources. The Egyptian Institute saw the construction of laboratories, libraries and a printing press. The group worked prodigiously, and some of their discoveries were not finally catalogued until the 1820s. A young engineering officer, Pierre-François Bouchard, discovered the Rosetta Stone in July 1799. Many of the antiquities discovered by the French in Egypt, including the stone, were signed over to the British at the end of the campaign by Menou as part of his treaty with Hutchinson. The French scholars' research in Egypt gave rise to the four-volume Mémoires sur l'Égypte (published from 1798 to 1801).
A subsequent and more comprehensive text was the Description de l'Égypte, published on Napoleon's orders between 1809 and 1821. Publications such as these, recording the expedition's discoveries in Egypt, gave rise to a fascination with Ancient Egyptian culture and to the birth of Egyptology in Europe. The scientists also tested methods of hot air ballooning while in Egypt. Several months after the revolt of Cairo in 1798, the inventor Nicolas-Jacques Conté and the mathematician Gaspard Monge built a hot air balloon from paper, coloured with the red, white and blue tricolour of the French Republic. They launched the balloon over Azbakiyya Square before a crowd of spectators, but it soon fell to earth, causing panic among the onlookers. The French had also planned to demonstrate hot air balloon flight during their celebrations of the anniversary of the founding of the French Republic in 1798, but the scientists had lost their equipment in the Battle of the Nile. Printing press The printing press was first introduced to Egypt by Napoleon. He brought with his expedition French, Arabic and Greek printing presses, which were far superior in speed, efficiency and quality to the nearest presses in use, those of Istanbul. In the Middle East, Africa, India and even much of Eastern Europe and Russia, printing was a minor, specialised activity until the 18th century at least. From about 1720, the Müteferrika Press in Istanbul produced substantial amounts of printing, of which some Egyptian clerics were aware at the time. Juan Cole reports that, "Bonaparte was a master of what we would now call spin, and his genius for it is demonstrated by reports in Arabic sources that several of his more outlandish allegations were actually taken seriously in the Egyptian countryside." Bonaparte's initial use of Arabic in his printed proclamations was rife with error. In addition to much of the awkwardly translated Arabic wording being grammatically unsound, the proclamations were often so poorly constructed as to be undecipherable. The French Orientalist Jean Michel de Venture de Paradis, plausibly with the help of Maltese assistants, was responsible for translating the first of Napoleon's French proclamations into Arabic. The Maltese language is distantly related to the Egyptian dialect of Arabic, while classical Arabic differs greatly from both in grammar, vocabulary and idiom. Venture de Paradis, who had lived in Tunis, understood Arabic grammar and vocabulary, but did not know how to use them idiomatically. The Sunni Muslim clerics of Al-Azhar University in Cairo reacted incredulously to Napoleon's proclamations. Abd al-Rahman al-Jabarti, a Cairene cleric and historian, received the proclamations with a combination of amusement, bewilderment and outrage. He derided the poor Arabic grammar and infelicitous style of the French proclamations. Over the course of Napoleon's invasion of Egypt, al-Jabarti wrote a wealth of material about the French and their occupation tactics. Among his observations, he rejected Napoleon's claim that the French were "muslims" (the wrong noun case was used in the Arabic proclamation, an error conventionally rendered in English with a lower-case "m"), and he only poorly understood the French concepts of a republic and democracy – words which did not exist in Arabic at the time. Analysis In addition to its significance in the wider French Revolutionary Wars, the campaign had a powerful impact on the Ottoman Empire in general, and the Arab world in particular.
The invasion demonstrated to the Middle East the military, technological and organisational superiority of the Western European powers. This led to profound social changes in the region. The invasion introduced Western inventions, such as the printing press, and ideas, such as liberalism and incipient nationalism, to the Middle East, eventually leading to the establishment of Egyptian independence and modernization under Muhammad Ali Pasha in the first half of the 19th century and ultimately to the Nahda, or Arab Renaissance. To modernist historians, the French arrival marks the start of the modern Middle East. Napoleon's destruction of the conventional Mamluk soldiers at the Battle of the Pyramids served as a spur for modernising Arab monarchs to implement wide-ranging military reforms. While the Egyptian Islamic scholar and historian al-Jabarti was critical of Napoleon and the French, he preferred them over the Ottomans. To Jabarti, Napoleon was compassionate towards Muslims and the poor, and he safeguarded the lives of innocents and civilians. This was at odds with the "arrogance, cruelty and tyranny" of Ottoman rule, which he characterised as an un-Islamic system marked by corruption, backwardness and summary executions. Although they opposed the French Republic and the ideas of the French Revolution, both Jabarti and his disciple Hassan Al-Attar were astonished by French technological advances and appreciated what they perceived as the fair nature of trials in the French judicial system. The campaign ultimately ended in failure, with 15,000 French troops killed in action and 15,000 by disease. Napoleon's reputation as a brilliant military commander nevertheless remained intact and continued to grow, despite some of his failures during the campaign. This was due to his expert propaganda, such as his Courrier de l'Égypte, set up to propagandise the expeditionary force itself and support its morale. Such propaganda spread back to France, where news of defeats such as those at sea in Aboukir Bay and on land in Syria was suppressed. Defeats could be blamed on the now-assassinated Kléber, leaving Napoleon free from blame and with a burnished reputation. This opened his way to power, and he profited from his reputation by making himself First Consul in the coup d'état of 18 Brumaire (November 1799). Imperialism The French invasion of Egypt is widely regarded in contemporary academic circles as "the first act of modern European imperialism" and is also criticised for its role in shaping the civilizing mission narrative of 19th-century European colonial empires. According to Professor Edward W. Said, the Napoleonic invasion led to the dominance of Orientalist narratives of the Muslim world: "with Napoleon's occupation of Egypt, processes were set in motion between East and West that still dominate our contemporary cultural and political perspectives. And the Napoleonic expedition, with its great collective monument of erudition, the Description de l'Égypte, provided a scene or setting for Orientalism... Napoleon's invasion of Egypt in 1798 and his foray into Syria have had by far the greater consequence for the modern history of Orientalism." Mamelukes in French service Colonel Barthelemy Serra took the first steps towards creating a Mameluke Corps in France. On 27 September 1800, he wrote a letter from Cairo to the first consul, couched in an Oriental style.
He regretted being so far away from Napoleon, offered his total devotion to the French nation and expressed the Mamelukes' wish to become the bodyguard of the first consul. They wished to serve him as living shields against those who would seek to harm him. The first consul became receptive to the idea of admitting a unit of carefully selected cavalrymen as his personal guard. He had an officer pay appropriate respects to the foreign troops and provide him with a full report on the number of refugees. French order of battle Timeline and battles See also References External links |
======================================== |
[SOURCE: https://github.com/solutions/executive-insights] | [TOKENS: 348] |
Executive insights, curated just for you Industry insights Starting today, we’re charging fairly for Actions across the board which reduces the price of GitHub Hosted Runners and the price the average GitHub customer pays. Discover the Gartner® roadmap for achieving 25% to 30% productivity gains by applying AI across the entire software development lifecycle. Learn why Gartner positioned GitHub as a Leader for the second year in a row—highest and furthest in both Ability to Execute and Completeness of Vision. Buying AI tools without empowering people to use them is a fast track to failure. That’s where advocates come in. They’re the human bridge between strategy and execution. GitHub was named a Leader in the IDC MarketScape for AI Coding and Software Engineering Technologies. The strategies detailed here are the product of GitHub's internal AI for Everyone initiative, which guides our company's efforts to embed AI into the fabric of how we work. Customer stories 90% of Fortune 100 choose GitHub 433% ROI with GitHub Enterprise 77,000 organizations use GitHub Copilot 75% Reduced time spent managing tools. Our recent study with Accenture shows that AI-driven tools like GitHub Copilot, when integrated into daily workflows, can significantly boost productivity, job satisfaction, and overall code quality without adding complexity. Discover how to seamlessly integrate AI into your development processes with GitHub Copilot and see measurable impact across your organization. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Insectivore] | [TOKENS: 1063] |
Contents Insectivore An insectivore is a carnivorous animal or plant which eats insects. An alternative term is entomophage, which can also refer to the human practice of eating insects. The first vertebrate insectivores were amphibians. When they evolved 400 million years ago, the first amphibians were piscivores, with numerous sharp conical teeth, much like a modern crocodile. The same tooth arrangement is, however, also suited to eating animals with exoskeletons, so the ability to eat insects can stem from piscivory. At one time, insectivorous mammals were scientifically classified in an order called Insectivora. This order is now abandoned, as not all insectivorous mammals are closely related. Most of the Insectivora taxa have been reclassified; those that have not yet been reclassified and have been found to be truly related to each other remain in the order Eulipotyphla. Although individually small, insects exist in enormous numbers. Insects make up a very large part of the animal biomass in almost all non-marine, non-polar environments. It has been estimated that the global insect biomass is in the region of 10¹² kg (one billion tonnes), with an estimated population of 10¹⁸ (one billion billion, or a quintillion) organisms. Many creatures depend on insects for their primary diet, and many that do not (and are thus not technically insectivores) nevertheless use insects as a protein supplement, particularly when they are breeding. Examples Examples of insectivores include various species of carp, opossums, frogs, lizards (e.g. chameleons, geckos), nightingales, swallows, echidnas, numbats, anteaters, armadillos, aardvarks, pangolins, aardwolves, bats, and spiders. Even large mammals are recorded as eating insects; the sloth bear is perhaps the largest insectivore. Insects themselves can also be insectivores; examples are dragonflies, hornets, ladybugs, robber flies, and praying mantises. Insectivory also features to various degrees amongst primates, such as marmosets, tamarins, tarsiers, galagos and the aye-aye. There is some suggestion that the earliest primates were nocturnal, arboreal insectivores. Insectivorous plants Insectivorous plants are plants that derive some of their nutrients from trapping and consuming animals or protozoans. The benefit they derive from their catch varies considerably; in some species it might account for only a small part of their nutrient intake, while in others it might be an indispensable source of nutrients. As a rule, however, such animal food, however valuable it might be as a source of certain critically important minerals, is not the plants' major source of energy, which they generally derive mainly from photosynthesis. Insectivorous plants might consume insects and other animal material trapped adventitiously. However, most species for which such food represents an important part of their intake are specifically, often spectacularly, adapted to attract and secure adequate supplies. Their prey animals typically, but not exclusively, comprise insects and other arthropods.
Plants highly adapted to reliance on animal food use a variety of mechanisms to secure their prey, such as pitfalls, sticky surfaces, hair-trigger snaps, bladder-traps, entangling furriness, and lobster-pot trap mechanisms. Also known as carnivorous plants, they appear adapted to grow in places where the soil is thin or poor in nutrients, especially nitrogen, such as acidic bogs and rock outcroppings. Insectivorous plants include the Venus flytrap, several types of pitcher plants, butterworts, sundews, bladderworts, the waterwheel plant, Brocchinia and many members of the Bromeliaceae. The list is far from complete, and some plants, such as Roridula species, exploit their prey mainly in a mutualistic relationship with other creatures, such as resident organisms that contribute to the digestion of the prey. In particular, animal prey supply carnivorous plants with nitrogen, but they are also important sources of various other soluble minerals, such as potassium, and of trace elements that are in short supply in the environments where the plants flourish. This gives them a decisive advantage over other plants there, whereas in nutrient-rich soils they tend to be out-competed by plants adapted to aggressive growth where nutrient supplies are not the major constraint. Technically these plants are not strictly insectivorous, as they consume any animal that they can secure and digest; the distinction is trivial, however, because not many primarily insectivorous organisms exclusively consume insects. Most of those that do have such a restrictive diet, such as certain parasitoids and hunting wasps, are specialized to exploit particular species, not insects in general. Indeed, much as large mantids and spiders will do, the larger varieties of pitcher plants have been known to consume vertebrates such as small rodents and lizards. Charles Darwin wrote the first well-known treatise on carnivorous plants in 1875. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Phil_Harrison] | [TOKENS: 750] |
Contents Phil Harrison Phil Harrison is a British video gaming and corporate executive. He was a member of the original PlayStation team at Sony Computer Entertainment before and after its launch, and held positions at its European, American and global divisions until 2008. In 2012, he joined Microsoft and served as European corporate vice president of Xbox until 2015. Harrison joined Google in 2018, leading its Stadia gaming division; he left the company following the discontinuation of the service in 2023. Background Harrison became interested in computers at a young age when he was given a Commodore 64. His first work in computing was a side job at the age of 14, doing graphics work on a British computer called the Oric-1. Career From 1989 to 1992, Harrison served as head of development for Mindscape International, and prior to that as a game designer and graphic artist in the UK. After joining Sony in 1992, Harrison held executive management positions in Europe and North America, where he served as vice president, 3rd Party Relations and Research and Development, for Sony Computer Entertainment America from 1996 to 2000. He was a core member of the teams that successfully launched the first three PlayStation consoles and their software, which helped expand the market for computer entertainment worldwide. A 1995 article in Next Generation called Harrison "Sony Computer Entertainment's European PlayStation primary evangelist." At E3 in May 2005, he showcased the first public real-time demonstrations of PlayStation 3 development hardware. In September 2005, Sony Computer Entertainment unified its regional product development operations under a global structure, Sony Computer Entertainment Worldwide Studios (SCE WWS), and appointed Harrison as President of the new organization. Working closely with Sony's studios in Japan, Europe and North America, Harrison was responsible for setting the global product strategy and managing the development operations of 13 studios in Japan, the UK (including Evolution Studios), the Netherlands (Guerrilla Games) and the USA. On 25 February 2008, Sony announced Harrison's resignation from the company, effective 29 February. On 3 March 2008, Infogrames Entertainment SA announced Phil Harrison as its new President and Directeur Général Délégué. Later that year he gave interviews in which he predicted that single-player games would become increasingly rare as consumers wanted "network connectivity" and "community". On 29 May 2009, it was announced that Harrison had become a non-executive director of Atari, following the company's full takeover of US-based Atari, Inc., and the renaming of Infogrames Entertainment SA to Atari. On 19 April 2010, Atari announced that Phil Harrison had resigned from the company's Board of Directors. On 17 May 2010, it was announced that Phil Harrison had joined the advisory board of David Perry's cloud gaming service, Gaikai. On 13 March 2012, it was announced that Phil Harrison had joined the Interactive Entertainment Team at Microsoft, heading the European operations for Xbox. On 17 April 2015, it was announced that Phil Harrison had left Microsoft Game Studios. On 22 January 2018, it was announced that Phil Harrison had joined Google as a vice president and general manager of the unit that would develop the cloud gaming platform Google Stadia, introduced in 2019. On 1 February 2021, Phil Harrison announced that Google would shut down its internal game development studio.
On 29 September 2022, Phil Harrison announced that Google would shut down Stadia entirely by 18 January 2023. Harrison reportedly left Google that January, around the time of Stadia's shutdown. References |
======================================== |