[SOURCE: https://en.wikipedia.org/wiki/William_Shockley] | [TOKENS: 4355] |
William Shockley William Bradford Shockley (February 13, 1910 – August 12, 1989) was an American solid-state physicist. He was the manager of a research group at Bell Labs that included John Bardeen and Walter Brattain. The three scientists were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect." Partly as a result of Shockley's attempts to commercialize a new transistor design in the 1950s and 1960s, California's Silicon Valley became a hotbed of electronics innovation. He recruited brilliant employees, but quickly alienated them with his autocratic and erratic management; they left and founded major companies in the industry. In his later life, while he was a professor of electrical engineering at Stanford University and afterward, Shockley became known as a racist and a eugenicist. Early life and education William Bradford Shockley was born on February 13, 1910, in London to American parents, and was raised in the family's hometown of Palo Alto, California, from the age of 3. His father, William Hillman Shockley, was a mining engineer who speculated in mines for a living and spoke eight languages. His mother, May Bradford, grew up in the American West, graduated from Stanford University, and became the first female U.S. deputy mining surveyor. Shockley was homeschooled up to the age of eight, due to his parents' dislike of public schools as well as Shockley's habit of violent tantrums. Shockley learned a little physics at a young age from a neighbor who was a Stanford physics professor. Shockley spent two years at Palo Alto Military Academy, then briefly enrolled in the Los Angeles Coaching School to study physics and later graduated from Hollywood High School in 1927. Shockley obtained a B.S. from Caltech in 1932 and a Ph.D. from MIT in 1936. The title of his doctoral thesis was Electronic Bands in Sodium Chloride, a topic suggested by his thesis advisor, John C. Slater. Career and research Shockley was one of the first recruits to Bell Telephone Laboratories by Mervin Kelly, who became director of research at the company in 1936 and focused on hiring solid-state physicists. Shockley joined a group headed by Clinton Davisson in Murray Hill, New Jersey. Executives at Bell Labs had theorized that semiconductors might offer solid-state alternatives to the vacuum tubes used throughout Bell's nationwide telephone system. Shockley conceived a number of designs based on copper-oxide semiconductor materials, and in 1939 he and Walter Brattain unsuccessfully attempted to build a prototype. Shockley published a number of fundamental papers on solid-state physics in Physical Review. In 1938, he received his first patent, "Electron Discharge Device", on electron multipliers. When World War II broke out, Shockley's prior research was interrupted and he became involved in radar research in Manhattan (New York City). Also at Bell, early in 1942 Shockley did the first known applied work on delay-line memory, which cost roughly one-hundredth as much as competing vacuum-tube electronic memory and was approximately as fast. This technology was incorporated inside the ENIAC computer by 1945. In May 1942, he took leave from Bell Labs to become a research director at Columbia University's Anti-Submarine Warfare Operations Group. This involved devising methods for countering the tactics of submarines with improved convoying techniques, optimizing depth charge patterns, and so on. 
Shockley traveled frequently to the Pentagon and Washington to meet high-ranking officers and government officials. In 1944, he organized a training program for B-29 bomber pilots to use new radar bomb sights. In late 1944, he took a three-month tour to bases around the world to assess the results. For this project, Secretary of War Robert Patterson awarded Shockley the Medal for Merit on October 17, 1946. In July 1945, the War Department asked Shockley to prepare a report on the question of probable casualties from an invasion of the Japanese mainland. Shockley concluded: If the study shows that the behavior of nations in all historical cases comparable to Japan's has in fact been invariably consistent with the behavior of the troops in battle, then it means that the Japanese dead and ineffectives at the time of the defeat will exceed the corresponding number for the Germans. In other words, we shall probably have to kill at least 5 to 10 million Japanese. This might cost us between 1.7 and 4 million casualties including 400,000 to 800,000 killed. This report influenced the decision of the United States to drop atomic bombs on Hiroshima and Nagasaki, which preceded the surrender of Japan. Shockley was the first physicist to propose a log-normal distribution to model the creation process for scientific research papers. Shortly after the war ended in 1945, Bell Labs formed a solid-state physics group, led by Shockley and chemist Stanley Morgan, which included John Bardeen, Walter Brattain, physicist Gerald Pearson, chemist Robert Gibney, electronics expert Hilbert Moore, and several technicians. Their assignment was to seek a solid-state alternative to fragile glass vacuum tube amplifiers. First attempts were based on Shockley's ideas about using an external electrical field on a semiconductor to affect its conductivity. These experiments failed every time in all sorts of configurations and materials. The group was at a standstill until Bardeen suggested a theory that invoked surface states that prevented the field from penetrating the semiconductor. The group changed its focus to study these surface states and they met almost daily to discuss the work. The group had excellent rapport and freely exchanged ideas. By the winter of 1946, they had enough results that Bardeen submitted a paper on the surface states to Physical Review. Brattain started experiments to study the surface states through observations made while shining a bright light on the semiconductor's surface. This led to several more papers (one of them co-authored with Shockley), which estimated the density of the surface states to be more than enough to account for their failed experiments. The pace of the work picked up significantly when they started to surround point contacts between the semiconductor and the conducting wires with electrolytes. Moore built a circuit that allowed them to vary the frequency of the input signal easily. Finally they began to get some evidence of power amplification when Pearson, acting on a suggestion by Shockley, put a voltage on a droplet of glycol borate placed across a p–n junction. Bell Labs' attorneys soon discovered Shockley's field effect principle had been anticipated and devices based on it patented in 1930 by Julius Lilienfeld, who filed his MESFET-like patent in Canada on October 22, 1925. Although the patent appeared "breakable" (it could not work), the patent attorneys based one of Bell Labs' four patent applications solely on the Bardeen-Brattain point contact design. 
Three others (submitted first) covered the electrolyte-based transistors with Bardeen, Gibney and Brattain as the inventors. Shockley's name was not on any of these patent applications. This angered Shockley, who thought his name should also be on the patents because the work was based on his field effect idea. He even made efforts to have the patent written only in his name, and told Bardeen and Brattain of his intentions. He secretly continued his own work to build a different sort of transistor based on junctions instead of point contacts, expecting that this kind of design would be more likely to be commercially viable. The point contact transistor, he believed, would prove to be fragile and difficult to manufacture. Shockley was also dissatisfied with certain parts of the explanation for how the point contact transistor worked and conceived of the possibility of minority carrier injection. On February 13, 1948, another team member, John N. Shive, built a point contact transistor with bronze contacts on the front and back of a thin wedge of germanium, proving that holes could diffuse through bulk germanium and not just along the surface as previously thought. Shive's invention sparked Shockley's invention of the junction transistor. A few months later he invented an entirely new, considerably more robust, type of transistor with a layer or 'sandwich' structure. This structure went on to be used for the vast majority of all transistors into the 1960s, and evolved into the bipolar junction transistor. Shockley later described the workings of the team as a "mixture of cooperation and competition". He also said that he kept some of his own work secret until his "hand was forced" by Shive's 1948 advance. Shockley worked out a rather complete description of what he called the "sandwich" transistor, and a first proof of principle was obtained on April 7, 1949. Meanwhile, Shockley worked on his book, Electrons and Holes in Semiconductors, which was published as a 558-page treatise in 1950. The tome included Shockley's critical ideas of drift and diffusion and the differential equations that govern the flow of electrons in solid state crystals. Shockley's diode equation is also described; it is given below for reference. This seminal work became the reference text for other scientists working to develop and improve new variants of the transistor and other devices based on semiconductors. This work resulted in his invention of the bipolar junction transistor, which was announced at a press conference on July 4, 1951. The ensuing publicity generated by the "invention of the transistor" often thrust Shockley to the fore, much to the chagrin of Bardeen and Brattain. Bell Labs management, however, consistently presented all three inventors as a team. Though Shockley would correct the record where reporters gave him sole credit for the invention, he eventually infuriated and alienated Bardeen and Brattain, and he essentially blocked the two from working on the junction transistor. Bardeen began pursuing a theory for superconductivity and left Bell Labs in 1951. Brattain refused to work with Shockley further and was assigned to another group. Neither Bardeen nor Brattain had much to do with the development of the transistor beyond the first year after its invention. Shockley left Bell Labs around 1953 and took a job at Caltech. 
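For reference, the ideal diode law that bears Shockley's name is commonly written as

$$ I = I_S \left( e^{V/(n V_T)} - 1 \right), $$

where $I_S$ is the reverse-bias saturation current, $V$ the voltage across the diode, $V_T = kT/q$ the thermal voltage, and $n$ the ideality factor (equal to 1 for an ideal diode).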
In 1956, Shockley started Shockley Semiconductor Laboratory in Mountain View, California, which was close to his elderly mother in Palo Alto, California. The company, a division of Beckman Instruments, Inc., was the first establishment working on silicon semiconductor devices in what came to be known as Silicon Valley. Shockley recruited brilliant employees to his company, but alienated them by undermining them relentlessly. "He may have been the worst manager in the history of electronics", according to his biographer Joel Shurkin. Shockley was autocratic, domineering, erratic, hard-to-please, and increasingly paranoid. In one well-known incident, he demanded lie detector tests to find the "culprit" after a company secretary suffered a minor cut. In late 1957, eight of Shockley's best researchers, who would come to be known as the "traitorous eight", resigned after Shockley decided not to continue research into silicon-based semiconductors. They went on to form Fairchild Semiconductor, a loss from which Shockley Semiconductor never recovered; it was purchased by Clevite in 1960, sold to ITT in 1968, and shortly afterward officially closed. Over the course of the next 20 years, more than 65 new enterprises would end up having employee connections back to Fairchild. A group of about thirty colleagues have met on and off since 1956 to reminisce about their time with Shockley, "the man who brought silicon to Silicon Valley", as the group's organizer said in 2002. Racist and eugenicist views After Shockley left his role as Director of Shockley Semiconductor in April 1960, he joined Stanford University, where he was appointed Alexander M. Poniatoff Professor of Engineering and Applied Science in 1963, a position he held until his retirement in 1975. In the last two decades of his life, Shockley, who had no degree in genetics, became widely known for his extreme views on race and human intelligence, and his advocacy of eugenics. Bo Lojek, a biographer and former colleague of Shockley, claims that "two major events triggered Shockley's interest in heredity and intelligence." The first was a 1963 news story in which Rudy Hoskins, a 17-year-old black boy, was hired to throw "blinding acid from a baby bottle" into the eyes of a local white shopkeeper. The second came shortly after, when an educational-testing research director mailed Shockley an article by J. P. Guilford proposing a new operational model of intelligence. As described by his Los Angeles Times obituary, "He went from being a physicist with impeccable academic credentials to amateur geneticist, becoming a lightning rod whose views sparked campus demonstrations and a cascade of calumny." He thought this work was important to the future of humanity and described it as the most important work of his career. He argued that a higher rate of reproduction among purportedly less intelligent people was having a dysgenic effect, and argued that a drop in average intelligence would lead to a decline in civilization. He also claimed that black people were genetically and intellectually inferior to white people. Shockley's biographer Joel Shurkin notes that for much of Shockley's life in the racially segregated United States of the time, he had almost no contact with black people. In a debate with psychiatrist Frances Cress Welsing and on Firing Line with William F. 
Buckley Jr., Shockley argued, "My research leads me inescapably to the opinion that the major cause of the American Negro's intellectual and social deficits is hereditary and racially genetic in origin and, thus, not remediable to a major degree by practical improvements in the environment." Shockley was one of the race theorists who received money from the Pioneer Fund, and at least one donation to him came from its founder, the eugenicist Wickliffe Draper. Shockley proposed that individuals with IQs below 100 should be paid to undergo voluntary sterilization, $1,000 for each of their IQ points under 100. This proposal led the University of Leeds to withdraw its offer of an honorary degree to him. Anthropologist and far-right activist Roger Pearson defended Shockley in a self-published book co-authored with Shockley. In 1973, University of Wisconsin–Milwaukee professor Edgar G. Epps argued that "William Shockley's position lends itself to racist interpretations". The Southern Poverty Law Center describes Shockley as a white nationalist who failed to produce evidence for his eugenic theories amidst "near-universal acknowledgement that his work was that of a racist crank". The science writer Angela Saini describes Shockley as having been "a notorious racist." Shockley insisted that he was not a racist. He wrote that his findings do not support white supremacy, instead claiming that East Asians and Jews fare better than whites intellectually. In 1973, Edgar Epps wrote that "I am pleased that Professor Shockley is not an Aryan supremacist, but I would remind him that a theory espousing hereditary superiority of Orientals or Jews is just as racist in nature as the Aryan supremacy doctrine". Shockley's advocacy of eugenics triggered protests. In one incident, the science society Sigma Xi, fearing violence, canceled a 1968 convocation in Brooklyn where Shockley was scheduled to speak. In Atlanta in 1981, Shockley filed a libel suit against the Atlanta Constitution after a science writer, Roger Witherspoon, compared Shockley's advocacy of a voluntary sterilization program to Nazi human experimentation. The suit took three years to go to trial. Shockley won the suit but received only one dollar in damages and no punitive damages. Shockley's biographer Joel Shurkin, a science writer on the staff of Stanford University during those years, sums up the outcome by saying that the statement was defamatory, but that Shockley's reputation was worth little by the time the trial reached a verdict. Shockley taped his telephone conversations with reporters, transcribed them, and sent the transcripts to the reporters by registered mail. At one point, he toyed with the idea of making the reporters take a simple quiz on his work before he would discuss it with them. His habit of saving all of his papers (including laundry lists) provides abundant documentation on his life for researchers. Shockley was a candidate for the Republican nomination in the 1982 United States Senate election in California. He ran on a single-issue platform of opposing the "dysgenic threat" that he alleged African-Americans and other groups posed. He came in eighth place in the primary, receiving 8,308 votes, or 0.37% of the vote. According to Shurkin, by this time, "His racism destroyed his credibility. Almost no one wanted to be associated with him, and many of those who were willing did him more harm than good". 
The Foundation for Research and Education on Eugenics and Dysgenics (FREED) was a non-profit organization founded in the United States in March 1970 to support Shockley's research; Shockley served as the foundation's president and R. Travis Osborne as a member. The foundation released the newsletter "FREED" and research papers at Stanford University. According to its mission statement, the organization was founded "solely for scientific and educational purposes related to human population and quality problems". From 1969 to 1976, the Pioneer Fund allocated about $2.5 million (adjusted for inflation to 2023) to support Shockley's endeavors. This funding was distributed through grants to Stanford University for the exploration of "research into the factors which affect genetic potential" and also directly to FREED. Via FREED, Shockley promoted his concept of a "Voluntary Sterilization Bonus Plan", proposing to compensate economically disadvantaged women for undergoing sterilization procedures. In 1970, Shockley listed former Alaska senator Ernest Gruening as a director of FREED. Personal life At age 23 and while still a student, Shockley married Jean Bailey in August 1933. The couple had two sons and a daughter. Shockley separated from her in 1953. He married Emily Lanning, a psychiatric nurse, in 1955; she helped him with some of his theories. Although one of his sons earned a PhD at Stanford University and his daughter graduated from Radcliffe College, Shockley believed his children "represent a very significant regression ... my first wife – their mother – had not as high an academic-achievement standing as I had". Shockley was an accomplished rock climber, going often to the Shawangunks in the Hudson River Valley. His 1953 route, known as "Shockley's Ceiling", is one of the classic climbing routes in the area. Mountain Project, a web-based climbing guidebook, reports that the route's name has been changed to "The Ceiling" due to Shockley's eugenics controversies. A 1996 guidebook notes that the original party on this route avoided the ceiling in question. The guidebook lists this variation as "Shockley's Without." He was popular as a speaker, lecturer, and amateur magician. He once "magically" produced a bouquet of roses at the end of his address before the American Physical Society. Shockley was also known in his early years for elaborate practical jokes. He had a longtime hobby of raising ant colonies. Shockley donated sperm to the Repository for Germinal Choice, a sperm bank founded by Robert Klark Graham in hopes of spreading humanity's best genes. The bank, called by the media the "Nobel Prize sperm bank", claimed to have three Nobel Prize-winning donors, though Shockley was the only one to publicly acknowledge his involvement. However, Shockley's controversial views brought the Repository for Germinal Choice a degree of notoriety and may have discouraged other Nobel Prize winners from donating sperm. Shockley was unhappy in his life and was often psychologically and sometimes physically abusive toward his sons. On one occasion, he reportedly played Russian roulette in a suicide attempt. Shockley died of prostate cancer on August 12, 1989, in Stanford, California, at the age of 79. At the time of his death, he was estranged from most of his friends and family, except his second wife, the former Emmy Lanning (1913–2007). 
His children reportedly learned of his death by reading his obituary in the newspaper. He is buried in Alta Mesa Memorial Park in Palo Alto, California. Recognition Patents Shockley was granted over 90 US patents. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-328] | [TOKENS: 12858] |
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position even in midair. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPCs), trading emeralds for different goods and vice versa. 
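The block-and-voxel model described above maps naturally onto a simple data structure. The following Python sketch is purely illustrative (it is not Mojang's code, and the chunk dimensions and block IDs are invented): a dense 3D array of block IDs supporting the mine-and-place loop.

```python
# Toy voxel chunk: a dense 3D array of block IDs (illustrative only; the
# dimensions and block IDs below are invented, not the game's actual values).
AIR, DIRT, STONE = 0, 1, 2

class Chunk:
    SIZE_X, SIZE_Y, SIZE_Z = 16, 16, 16  # hypothetical chunk dimensions

    def __init__(self):
        # blocks[x][y][z] holds a block ID; the chunk starts as all air
        self.blocks = [[[AIR] * self.SIZE_Z for _ in range(self.SIZE_Y)]
                       for _ in range(self.SIZE_X)]

    def mine(self, x, y, z):
        """Break the block at (x, y, z): return its ID and leave air behind."""
        block, self.blocks[x][y][z] = self.blocks[x][y][z], AIR
        return block

    def place(self, x, y, z, block):
        """Place a block, but only into empty (air) space."""
        if self.blocks[x][y][z] == AIR:
            self.blocks[x][y][z] = block

chunk = Chunk()
chunk.place(3, 0, 3, STONE)
assert chunk.mine(3, 0, 3) == STONE  # a mined block can be placed elsewhere
```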
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
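The seed-driven procedural generation described above hinges on one property: terrain is a pure function of the seed and the coordinates, so the same seed always reproduces the same world. The Python sketch below is an assumption-laden simplification (the real game layers gradient noise across biomes, and the mixing constants here are arbitrary), but it illustrates that determinism:

```python
import random

def column_height(seed: int, x: int, z: int, base: int = 64, relief: int = 8) -> int:
    """Toy terrain: the surface height of the column at (x, z), derived
    only from the world seed and the coordinates (arbitrary mixing constants)."""
    rng = random.Random(seed ^ (x * 341873128712) ^ (z * 132897987541))
    return base + rng.randint(-relief, relief)

seed = 8675309
# Revisiting the same column always yields the same terrain:
assert column_height(seed, 100, -200) == column_height(seed, 100, -200)
# A different seed generates (almost certainly) different terrain:
print(column_height(seed, 0, 0), column_height(seed + 1, 0, 0))
```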
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough; it takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then respawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as spectators after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience it as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
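As a rough sketch of how the Survival-mode loop described above fits together, here is a toy model in Python. Only the 20-point health and hunger bars match the real game; the per-tick pacing and numbers are invented for illustration:

```python
# Toy Survival loop: hunger drains with activity, an empty hunger bar starves
# the player, and a full hunger bar regenerates health (invented pacing).
MAX_HEALTH = MAX_HUNGER = 20  # both bars hold 20 points in the real game

class Player:
    def __init__(self):
        self.health, self.hunger = MAX_HEALTH, MAX_HUNGER

    def eat(self, food_points: int):
        self.hunger = min(MAX_HUNGER, self.hunger + food_points)

    def tick(self):
        if self.hunger > 0:
            self.hunger -= 1        # activity gradually drains hunger
        if self.hunger == 0:
            self.health -= 1        # starvation damage
        elif self.hunger == MAX_HUNGER:
            self.health = min(MAX_HEALTH, self.health + 1)  # regeneration
        return self.health > 0      # False means the player has died

p = Player()
for _ in range(25):                 # with no food, starvation sets in
    alive = p.tick()
print(p.health, p.hunger, alive)    # health drops below maximum, hunger at 0
```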
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a hosting provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server (a minimal sketch of such checks is given below). Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between the Windows 10, iOS, and Android platforms was announced for Realms starting in June 2016, with Xbox One and Nintendo Switch support, along with support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
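The operator-managed restrictions mentioned above reduce to simple allow/deny checks at join time. The Python sketch below is hypothetical (the list names, contents, and check order are invented for illustration; real servers keep such lists in configuration files):

```python
# Hypothetical join check: deny banned IPs, then enforce an optional
# username allowlist, mirroring the operator restrictions described above.
ALLOWED_USERS = {"alice", "bob"}        # example allowlist (invented)
BANNED_IPS = {"203.0.113.7"}            # example denylist (documentation IP)

def may_join(username: str, ip: str, allowlist_enabled: bool = True) -> bool:
    if ip in BANNED_IPS:
        return False
    if allowlist_enabled and username.lower() not in ALLOWED_USERS:
        return False
    return True

assert may_join("Alice", "198.51.100.2")        # allowed user, clean IP
assert not may_join("mallory", "203.0.113.7")   # banned IP is rejected
```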
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement explaining that "the code would not be run or read by the game itself" and would run only when the image containing the skin was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including the return of the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. 
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default on arbitrary worlds or with arbitrary texture packs. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development began for the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—in May 2009, and ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. 
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. Bedrock Edition received a native PlayStation 5 version on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features in this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. 
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound-design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether a third volume of his music was still intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and was never commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since debuting on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014, the Xbox 360 version had sold 12 million copies, and Minecraft: Pocket Edition had sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million in the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs could still be added to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian named Minecraft the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped bolster indie game development in the early 2010s, it also helped popularize the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's most popular game globally. Some popular commentators were employed by Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering in Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software tools have been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and was in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming (see the sketch below). In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
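The in-game machines mentioned above are possible because redstone supplies a universal logic primitive: a redstone torch inverts its input, and torch arrangements behave like NOR gates, from which any other gate can be composed. As a loose illustration of the idea only (a plain-Python model written for this summary, not actual game code or any particular mod's API), the sketch below builds a one-bit half adder out of nothing but NOR, in the way hobbyists work up to full 8-bit adders inside the game:

    # Toy model of redstone-style logic: every gate is derived from NOR,
    # mirroring how redstone torches give Minecraft players a universal gate.
    def nor(a: bool, b: bool) -> bool:
        return not (a or b)

    def not_(a: bool) -> bool:           # a lone torch: NOR with one input
        return nor(a, a)

    def or_(a: bool, b: bool) -> bool:   # NOR followed by an inverter
        return not_(nor(a, b))

    def and_(a: bool, b: bool) -> bool:  # De Morgan: NOR of the two inversions
        return nor(not_(a), not_(b))

    def xor(a: bool, b: bool) -> bool:   # (a OR b) AND NOT (a AND b)
        return and_(or_(a, b), not_(and_(a, b)))

    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        """Return (sum, carry) for one binary digit."""
        return xor(a, b), and_(a, b)

    # Truth-table check; chaining such adders yields multi-bit arithmetic,
    # the basis of the in-game "computers" described above.
    for a in (False, True):
        for b in (False, True):
            print(a, b, half_adder(a, b))

Chaining half adders into full adders, and full adders into registers, is exactly the progression that lets players (and the teaching mods built on these mechanics) assemble arithmetic units block by block.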
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", often due to direct inspiration from Minecraft or superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears ultimately proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA notice was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates. 
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-131] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft has invested over $13 billion in OpenAI and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement from authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the company's then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Altman and Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected lagged significantly behind the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous", though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring a willingness to match or exceed any rival offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the bid complicated Altman's restructuring plan by suggesting a lower bound for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, arguing that the restructuring was illegal and would remove governance safeguards available to the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was needed to pay for usage of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many Windows installations, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, an achievement that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, with the partners planning to fund the project over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military AI applications, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI has reported revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position as an industry leader. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, valuing the company at $500 billion and making it the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him had not worked out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's roughly 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign itself. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up the board seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure in return for $11.9 billion paid over five years; Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In the same month, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI had begun sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama redistributed the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In the same month, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal had not been realized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts' worth of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, OpenAI announced GPT-2, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers reopened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry". Additionally, OpenAI introduced ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, and shared preliminary benchmark results for the upcoming OpenAI o3 models. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. 
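The API described above remains the standard route for programmatic access to the models in this section. As a rough illustration only (a minimal sketch using the publicly documented openai Python SDK, in which the model identifier and prompt are placeholders rather than anything specified in this article), a basic request might look like:

    # Minimal sketch: calling a hosted GPT model via the openai Python SDK
    # (v1-style interface). Assumes the `openai` package is installed and an
    # API key is exported as OPENAI_API_KEY; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model identifier
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain what an API is in one sentence."},
        ],
    )

    print(response.choices[0].message.content)

The same request-response pattern, with a list of role-tagged messages sent to a named model, underlies ChatGPT Plus features, the GPTs platform, and the agent products discussed below.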
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind had solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model that, according to the company, is better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it had received nothing close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in discussions containing personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, co-leader of the superalignment team, also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. That same month, the FTC launched an investigation into OpenAI over allegations that the company had scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements (for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent) and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal law. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI alleging copyright infringement. The litigation is said to have charted a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications, like The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, Stein-Erik Soelberg, then 56 years old, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, described as paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Ergosphere] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
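Michell's argument can be checked with elementary Newtonian mechanics: at fixed density, mass grows with the cube of the radius, so escape velocity grows linearly with radius. The following Python sketch is illustrative only; the constants are standard values and the helper function is ours, not from any source cited here.

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg
    R_sun = 6.957e8    # solar radius, m

    def escape_velocity(mass, radius):
        """Newtonian escape velocity, v = sqrt(2GM/R)."""
        return math.sqrt(2 * G * mass / radius)

    # Same density as the Sun but 500 times the radius:
    # mass scales with radius cubed.
    R = 500 * R_sun
    M = 500**3 * M_sun

    v = escape_velocity(M, R)
    print(f"{v / 1000:.0f} km/s, i.e. {v / c:.2f} c")
    # ~309,000 km/s, just over the speed of light, as Michell concluded.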
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in a galaxy's central bulge to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying that an object is a black hole by this definition would require waiting an infinite time, at an infinite distance from the black hole, to confirm that nothing ever escapes; the definition therefore cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is commonly taken to be a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
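As a rough numerical illustration, the extremality bound above can be evaluated directly in SI units. The following sketch is ours and illustrative only; the function name and test values are not from the article.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    eps0 = 8.854e-12     # vacuum permittivity, F/m
    M_sun = 1.989e30     # solar mass, kg

    def is_subextremal(M, J=0.0, Q=0.0):
        """True if Q^2/(4 pi eps0) + c^2 J^2/(G M^2) <= G M^2 (SI units)."""
        lhs = Q**2 / (4 * math.pi * eps0) + (c * J)**2 / (G * M**2)
        return lhs <= G * M**2

    # For an uncharged hole the bound reduces to J <= G M^2 / c.
    J_max = G * M_sun**2 / c
    print(is_subextremal(M_sun, J=0.9 * J_max))   # True: sub-extremal
    print(is_subextremal(M_sun, J=1.1 * J_max))   # False: no event horizon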
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being forced into the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become a white dwarf. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
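The dimensionless spin and the charge bound can likewise be evaluated numerically. Note that Q ≤ √G M above is written in Gaussian units; in SI units it corresponds to Q ≤ √(4πε₀G) M. The sketch below is ours and uses the SI form:

    import math

    G = 6.674e-11       # m^3 kg^-1 s^-2
    c = 2.998e8         # m/s
    eps0 = 8.854e-12    # F/m
    M_sun = 1.989e30    # kg

    def spin_parameter(J, M):
        """Dimensionless spin a* = c J / (G M^2); lies between 0 and 1."""
        return c * J / (G * M**2)

    def max_charge_si(M):
        """SI equivalent of the Gaussian-units bound Q <= sqrt(G) M."""
        return math.sqrt(4 * math.pi * eps0 * G) * M

    print(f"max charge of a 1 M_sun hole: {max_charge_si(M_sun):.2e} C")
    # ~1.7e20 C; even a tiny net charge attracts neutralizing plasma,
    # which is why astrophysical black holes are modeled as uncharged.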
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes some of the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly-magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One proposed method of fueling these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbations to this orbit will lead to the particle spiraling into the black hole, and any outward perturbations will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where {\displaystyle r_{\rm {ISCO}}} is the radius of the ISCO, {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the black hole, {\displaystyle G} is the gravitational constant, and {\displaystyle c} is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
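The Schwarzschild ISCO formula translates directly into code. A minimal sketch (ours; the 10 M☉ example is illustrative):

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    def schwarzschild_radius(M):
        """r_s = 2 G M / c^2."""
        return 2 * G * M / c**2

    def isco_radius(M):
        """ISCO of a non-spinning, uncharged hole: r_ISCO = 3 r_s = 6 G M / c^2."""
        return 3 * schwarzschild_radius(M)

    M = 10 * M_sun                          # a typical stellar black hole
    print(f"r_s    = {schwarzschild_radius(M) / 1000:.1f} km")   # ~29.5 km
    print(f"r_ISCO = {isco_radius(M) / 1000:.1f} km")            # ~88.6 km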
For example, the ISCO for a particle orbiting retrograde can be as far out as about {\displaystyle 9\,GM/c^{2}} (4.5 Schwarzschild radii), while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3–5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
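For the Schwarzschild case, both the photon sphere and the apparent shadow size follow from the mass alone. The shadow's critical impact parameter, b = 3√3 GM/c² ≈ 2.6 r_s, is a standard Schwarzschild result not spelled out in the text above; the sketch below is ours and illustrative:

    import math

    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    M_sun = 1.989e30   # kg

    def photon_sphere_radius(M):
        """Schwarzschild photon sphere: 1.5 Schwarzschild radii = 3 G M / c^2."""
        return 3 * G * M / c**2

    def shadow_impact_parameter(M):
        """Critical impact parameter b = 3 sqrt(3) G M / c^2 (~2.6 r_s).
        Light aimed closer than this is captured, producing the shadow."""
        return 3 * math.sqrt(3) * G * M / c**2

    M = M_sun
    print(f"photon sphere:   {photon_sphere_radius(M) / 1000:.2f} km")    # ~4.43 km
    print(f"shadow 'radius': {shadow_impact_parameter(M) / 1000:.2f} km") # ~7.67 km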
In this area it is no longer possible for free falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km} ,} where rs is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to {\displaystyle r_{+}={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside of the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
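The inverse-square scaling of mean density with mass is easy to verify. A short sketch (ours, illustrative) checks the water-density claim for a 10⁸ M☉ hole:

    import math

    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    M_sun = 1.989e30   # kg

    def schwarzschild_radius(M):
        """r_s = 2 G M / c^2, about 2.95 km per solar mass."""
        return 2 * G * M / c**2

    def mean_density(M):
        """Mass divided by the Euclidean volume inside r_s; scales as 1/M^2."""
        r = schwarzschild_radius(M)
        return M / ((4.0 / 3.0) * math.pi * r**3)

    M = 1e8 * M_sun
    print(f"r_s     = {schwarzschild_radius(M):.2e} m")   # ~3.0e11 m, about 2 au
    print(f"density = {mean_density(M):.0f} kg/m^3")      # ~1800 kg/m^3, near water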
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter describing the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off of the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside, points where the curvature of spacetime becomes infinite, and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole is large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by annihilation of dark matter), or hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable to fragmentation to form multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly only found in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in the regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
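The balance just described is conventionally quantified by the Eddington luminosity, L_Edd = 4πGMm_p c/σ_T for ionized hydrogen. This standard formula is not given in the text itself, so the sketch below (ours) should be read as illustrative:

    import math

    G = 6.674e-11         # m^3 kg^-1 s^-2
    c = 2.998e8           # m/s
    m_p = 1.673e-27       # proton mass, kg
    sigma_T = 6.652e-29   # Thomson scattering cross-section, m^2
    M_sun = 1.989e30      # kg
    L_sun = 3.828e26      # solar luminosity, W

    def eddington_luminosity(M):
        """L_Edd = 4 pi G M m_p c / sigma_T: the luminosity at which
        radiation pressure on ionized hydrogen balances gravity."""
        return 4 * math.pi * G * M * m_p * c / sigma_T

    for M in (M_sun, 1e8 * M_sun):
        L = eddington_luminosity(M)
        print(f"M = {M / M_sun:.0e} M_sun: L_Edd = {L:.2e} W ({L / L_sun:.1e} L_sun)")
    # ~1.3e31 W (~3e4 L_sun) per solar mass, scaling linearly with mass.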
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes, with modern research predicting that primordial black holes must make up less than about 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
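Both the 62 nK figure and the Moon-mass threshold follow from the standard expression T_H = ħc³/(8πGMk_B), which is implied but not written out above. A sketch (ours, illustrative):

    import math

    hbar = 1.055e-34   # reduced Planck constant, J s
    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    k_B = 1.381e-23    # Boltzmann constant, J/K
    M_sun = 1.989e30   # kg
    M_moon = 7.35e22   # kg

    def hawking_temperature(M):
        """T_H = hbar c^3 / (8 pi G M k_B): inversely proportional to mass."""
        return hbar * c**3 / (8 * math.pi * G * M * k_B)

    print(f"T_H(1 M_sun) = {hawking_temperature(M_sun) * 1e9:.0f} nK")  # ~62 nK

    # Mass whose Hawking temperature equals the 2.7 K background:
    M_crit = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
    print(f"M_crit = {M_crit:.1e} kg vs. Moon = {M_moon:.1e} kg")
    # ~4.5e22 kg: only holes lighter than the Moon can currently evaporate.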
The classical laws are not equivalent to the laws of thermodynamics, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated. Observational evidence Millions of black holes of around 30 solar masses, derived from stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by the ratio of the observing wavelength to the aperture (θ ≈ λ/D). Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long perpendicular arms. The beams reflect off mirrors at the ends of the arms and converge at the intersection, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, the beams no longer cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully suppress terrestrial noise to detect them. Since the first direct detection in 2015 (announced in 2016), multiple gravitational-wave signals from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
In 1998, by fitting the motions of these stars to Keplerian orbits, astronomers were able to infer that a mass of 2.6×10⁶ M☉ must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the mass of Sagittarius A* to 4.3×10⁶ M☉, confined within a radius of less than 0.002 light-years. This upper limit on the radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not by itself prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine if it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff (TOV) limit is the largest mass a nonrotating neutron star can have, estimated at about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical companion reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
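The size comparison behind the Sagittarius A* argument is easy to reproduce. The sketch below computes the Schwarzschild radius r_s = 2GM/c² for the 4.3×10⁶ M☉ estimate and compares it with the 0.002 light-year orbital bound; the constants are rounded.

```python
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
LY    = 9.461e15    # metres per light-year

m_sgr_a   = 4.3e6 * M_SUN            # mass inferred from stellar orbits
r_schwarz = 2 * G * m_sgr_a / C**2   # Schwarzschild radius for that mass
r_bound   = 0.002 * LY               # orbit-derived upper limit on the size

print(f"Schwarzschild radius: {r_schwarz:.3g} m ({r_schwarz / LY:.2g} ly)")
print(f"orbital upper limit : {r_bound:.3g} m "
      f"(~{r_bound / r_schwarz:.0f} Schwarzschild radii)")
```

The bound comes out at roughly 1,500 Schwarzschild radii, which is why the orbit fits alone constrain, but do not prove, the presence of a horizon.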
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galactic nuclei have been studied carefully enough to identify and measure the actual masses of their central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Still more compact hypothetical stars, called electroweak stars, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms may allow black holes to grow faster than the theoretical Eddington limit, for example when dense gas in the accretion disk traps the outward radiation that would otherwise choke off further accretion. However, the formation of bipolar jets may prevent sustained super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole (1979). Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dor_Guez] | [TOKENS: 1024] |
Dor Guez Dor Guez Munayer (Hebrew: דור גז מונייר) is a Jerusalemite artist of Christian Palestinian and Tunisian Jewish origin, founder of The Christian Palestinian Archive, and the director of SeaPort Residency. Biography Dor Guez Munayer was born in Baka, Jerusalem. On his father's side he is the grandson of a Holocaust survivor; on his mother's side he is a son of the Munayer family, Palestinian Christians from Lod who were among the 2% of the city's population that remained in Lydda after the 1948 Arab–Israeli War, when the State of Israel was established. Artistic career Guez's photography, video, mixed media, and essays explore the relationship between art, narrative, and memory. In his examination of personal and official accounts of the past, Guez raises questions about the role of art in narrating unwritten histories and re-contextualizing visual and written documents. Since 2006, his research has focused on archival materials of the region and on his biracial background. In 2006, Guez began working on his Palestinian archival project, which draws on photographs from the first half of the 20th century, including images of Guez's family from Jaffa and Lydda. After completing his studies, he presented solo exhibitions at the Petah Tikva Museum of Art (2009) and at the Tel Aviv Museum of Art (2011). The two exhibitions dealt with the ramifications of the 1948 war for the Palestinian minority in Israel. Guez serves as the head of the MA in Fine Art program at Bezalel Academy of Arts and Design. On the occasion of Guez's solo exhibition at the ICA London, in collaboration with the A. M. Qattan Foundation, Guez was described as "a leading critical and artistic voice from the Middle East". Guez received his PhD in 2014. Guez's work has been shown in over thirty solo exhibitions worldwide, including the MAN Museum, Nuoro (2018); DEPO, Istanbul (2017); the Museum for Islamic Art, Jerusalem (2017); the Museum of Contemporary Art, Detroit (2016); the Institute of Contemporary Arts, London (2015); the Center for Contemporary Art, Tel Aviv (2015); the Rose Art Museum, Brandeis University, Massachusetts (2013); Artpace, San Antonio (2013); the Mosaic Rooms, Centre for Contemporary Arab Culture & Art, London (2013); the KW Institute for Contemporary Art, Berlin (2010); and the Petah Tikva Museum of Art (2009). He has participated in group exhibitions at the MODEM Museum (2018); the Arab World Institute (2017); the Buenos Aires Museum of Modern Art (2016); the North Coast Art Triennial, Denmark (2016); the Weatherspoon Art Museum, Greensboro, North Carolina (2015); the 17th, 18th, and 19th International Contemporary Art Festival Videobrasil, São Paulo (2011, 2013, 2015); the 8th Berlin Biennale for Contemporary Art (2014); the Cleveland Institute of Art (2014); the Triennale Museum, Milan (2014); the Centre of Contemporary Art, Toruń (2014); the Tokyo Metropolitan Museum of Photography (2014); the MAXXI Museum, Rome (2013); the Palais de Tokyo, Paris (2012); the 12th Istanbul Biennial (2011); and the Museum of Modern Art, Ljubljana (2010). Private life Dor Guez is openly gay. He is married to the American stylist Darnell Ross. The couple has a daughter and a son. Guez lives and works in Jaffa and New York City.
Published works Public collections Guez's works are held in the Tate Modern (London), the Centre Pompidou (Paris), the Guggenheim Abu Dhabi, the Los Angeles County Museum of Art (LACMA), the Princeton University Art Museum, the Tel Aviv Museum of Art, the Jewish Museum (New York), the Museum of Modern Art (Bogotá), the Rose Art Museum (Boston), the FRAC collection (Marseille), the Israel Museum (Jerusalem), the Schocken collection (Tel Aviv), the BNL collection (Rome), the Petah Tikva Museum of Art (Petah Tikva), Brandeis University (Waltham), the Recanati collection (New York), and Beit Hatfutsot (Tel Aviv). He is represented by the Goodman Gallery (London/Johannesburg/Cape Town/New York), the Dvir Gallery (Paris/Brussels/Tel Aviv), and the Carlier Gebauer Gallery (Berlin/Madrid). Awards and recognition Guez is the recipient of the Ruth Ann and Nathan Perlmutter Artist-in-Residency Award, Rose Art Museum, Brandeis University, and the International Artist-in-Residence Award, Artpace, San Antonio.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/MATH-MATIC] | [TOKENS: 378] |
MATH-MATIC MATH-MATIC is the marketing name for the AT-3 (Algebraic Translator 3) compiler, an early programming language for the UNIVAC I and UNIVAC II. MATH-MATIC was written beginning around 1955 by a team led by Charles Katz under the direction of Grace Hopper. A preliminary manual was produced in 1957 and a final manual the following year. Syntactically, MATH-MATIC was similar to Univac's contemporaneous business-oriented language, FLOW-MATIC, differing in providing algebraic-style expressions and floating-point arithmetic, and arrays rather than record structures. Notable features Expressions in MATH-MATIC could contain numeric exponents, including decimals and fractions, by way of a custom typewriter. MATH-MATIC programs could include inline assembler sections of ARITH-MATIC code and UNIVAC machine code. The UNIVAC I had only 1000 words of memory, and its successor the UNIVAC II as few as 2000. MATH-MATIC allowed for larger programs, automatically generating code to read overlay segments from UNISERVO tape as required. The compiler attempted to avoid splitting loops across segments. Influence In proposing the collaboration with the ACM that led to ALGOL 58, the Gesellschaft für Angewandte Mathematik und Mechanik wrote that it considered MATH-MATIC the closest available language to its own proposal. In contrast to Backus' FORTRAN, MATH-MATIC did not emphasise execution speed of compiled programs. The UNIVAC machines did not have floating-point hardware, and MATH-MATIC was translated via A-3 (ARITH-MATIC) pseudo-assembler code rather than directly to UNIVAC machine code, limiting its usefulness.
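The overlay scheme described under Notable features can be illustrated with a toy segmenter. This is a speculative sketch in Python, not MATH-MATIC's documented algorithm: it packs code chunks into fixed-size memory segments, treating each loop as an indivisible chunk so that, as the compiler tried to do, no loop is split across segments.

```python
def build_overlays(chunks, capacity):
    """Greedily assign code chunks to fixed-size overlay segments that would
    be read in from tape on demand. Each chunk is (name, size_in_words); a
    whole loop is passed as a single chunk so it never straddles segments.
    Illustrative only."""
    segments, current, used = [], [], 0
    for name, size in chunks:
        if size > capacity:
            raise ValueError(f"{name} cannot fit in any single segment")
        if used + size > capacity:   # segment full: start a new overlay
            segments.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        segments.append(current)
    return segments

# A hypothetical program too big for the UNIVAC I's 1000-word store.
program = [("SETUP", 300), ("LOOP1", 450), ("LOOP2", 600), ("OUTPUT", 250)]
for i, names in enumerate(build_overlays(program, capacity=1000)):
    print(f"segment {i}: {names}")
# segment 0: ['SETUP', 'LOOP1']   (750 words)
# segment 1: ['LOOP2', 'OUTPUT']  (850 words)
```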
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-62] | [TOKENS: 4993] |
Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making up the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m, while the declination coordinates are between +22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. The brightest stars, including those of Orion, are then visible in twilight for a few hours around local noon, in the brightest section of the sky, low in the north where the Sun sits just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is not visible from Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), are also points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky.
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large, roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. The hunter's head is marked by an additional eighth star, Meissa, which is fairly bright to the naked eye. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or the Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, is 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth and shines with a magnitude of 1.70; including its ultraviolet light, it is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two components orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it lies near the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lie at a distance of 1,150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars make up a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1,100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West of Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori), which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Radiating from near the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium, and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years.
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star-forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, 1,600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose brightness changes over very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1,500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6,400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), a figure similar to other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász) or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (at upper right) together form the reflex bow or the lifted scythe.
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and who was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA, "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally: fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt.
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion, representing the sound of the word, was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as a representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) in Hindu astrology. The Jain symbol carved in the Udayagiri and Khandagiri Caves in India in the 1st century BCE bears a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter), which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for "The Three Wise Men"). The Ojibwa/Chippewa Native Americans call this constellation Mesabi, meaning Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth tells that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry whoever could retrieve the arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned it and married her, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki, which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino peoples referred to the belt region in particular as "balatik" (ballista), as it resembles a trap of the same name, which fires arrows by itself and is usually used for catching pigs in the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria". In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy-field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan.
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare, and sometimes holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or Drie Susters (Three Sisters) by Afrikaans speakers in South Africa, and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not remain so forever, owing to the precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
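The 1930 Delporte boundaries quoted under Characteristics are machine-readable today. As a quick illustration (assuming the astropy package is available, with star coordinates rounded to their familiar J2000 values), the get_constellation helper reports which constellation a given point falls in:

```python
from astropy.coordinates import SkyCoord, get_constellation

# Rounded J2000 coordinates; get_constellation applies the IAU
# constellation boundaries drawn up by Delporte.
stars = {
    "Betelgeuse": SkyCoord("05h55m10s", "+07d24m25s"),
    "Rigel":      SkyCoord("05h14m32s", "-08d12m06s"),
    "Sirius":     SkyCoord("06h45m09s", "-16d42m58s"),  # down the Belt line
}
for name, coord in stars.items():
    print(f"{name:>10} -> {get_constellation(coord)}")
# Betelgeuse -> Orion, Rigel -> Orion, Sirius -> Canis Major
```

Sirius landing in Canis Major matches the navigational-aid rule above: extending the Belt southeastward leads out of Orion to the brightest star in the night sky.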
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars_sample-return_mission] | [TOKENS: 3439] |
Mars sample-return mission A Mars sample-return (MSR) mission is a proposed mission to collect rock and dust samples on Mars and return them to Earth. Such a mission would allow more extensive analysis than is possible with onboard sensors. Risks of cross-contamination of the Earth biosphere from returned Martian samples have been raised, though the risk of this occurring is considered to be low. Among the most recent concepts are a NASA-ESA proposal; a CNSA proposal, Tianwen-3; and a Roscosmos proposal, Mars-Grunt. In addition, a JAXA proposal, Martian Moons eXploration (MMX), is aimed at returning samples from Phobos. Although NASA and ESA's plans to return the samples to Earth were still in the design stage as of 2024, samples have been gathered on Mars by the Perseverance rover. Scientific value Once returned to Earth, stored samples can be studied with the most sophisticated science instruments available. Thomas Zurbuchen, associate administrator for science at NASA Headquarters in Washington, expects such studies to enable new discoveries in many fields. Samples may be reanalyzed in the future by instruments that do not yet exist. In 2006, the Mars Exploration Program Analysis Group identified 55 important investigations related to Mars exploration. In 2008, they concluded that about half of the investigations "could be addressed to one degree or another by MSR", making MSR "the single mission that would make the most progress towards the entire list" of investigations. Moreover, it was reported that a significant fraction of the investigations could not be meaningfully advanced without returned samples. One source of Mars samples is what are thought to be Martian meteorites, which are rocks ejected from Mars that made their way to Earth. As of August 2023, 356 meteorites had been identified as Martian, out of over 79,000 known meteorites. These meteorites are believed to be from Mars because their elemental and isotopic compositions are similar to rocks and atmospheric gases analyzed on Mars. History Mars return missions appeared in the technical literature while Apollo was still in development and before the first spacecraft had flown past Mars, with the expectation that people would be on board for the Mars ascent. The density of the Mars atmosphere was unknown at that time, so one Lockheed engineering author reported an analysis of trajectory options over a range of aerodynamic drag conditions for a 15-ton launch vehicle to reach a rendezvous orbit. At NASA, returning samples from Mars was studied jointly by the Langley Research Center and the Jet Propulsion Laboratory in the early 1970s, during the time that the Viking Mars lander mission was in development, and a Langley author noted that the "Mars surface-to-orbit launch vehicle" would need high performance because its mass would "have a substantial impact on the mass and systems requirements" for earlier mission phases, delivery of that vehicle to Mars, and launch preparations on Mars. For at least three decades, scientists have advocated the return of geological samples from Mars. One early concept was the Sample Collection for Investigation of Mars (SCIM) proposal, which involved sending a spacecraft in a grazing pass through Mars's upper atmosphere to collect dust and air samples without landing or orbiting. The Soviet Union considered a Mars sample-return mission, Mars 5NM, in 1975, but it was cancelled due to the repeated failures of the N1 rocket that would have launched it.
Another sample-return mission, Mars 5M (Mars-79), planned for 1979, was cancelled due to complexity and technical problems. In the mid-1980s, JPL mission planners noted that MSR had been "pushed by budgetary and other pressures into the '90s," and that the round trip would "impose large propulsion requirements." They presented a notional mass budget for a concept that would launch a 9.5-metric-ton payload from Earth, including a Mars orbiter for Earth return and a lander carrying a 400-kg rover and a "Mars return vehicle" with a mass of over 2 metric tons. A 20-kg sample canister would arrive at Earth containing 5 kg of samples, including scientific-quality cores drilled from every type of Mars terrain. In the late 1980s, multiple NASA centers contributed to a proposed Mars Rover Sample Return mission (MRSR). As described by JPL authors, one option for MRSR relied on a single launch of a 12-ton package including a Mars orbiter and Earth return vehicle, a 700-kg rover, and a 2.7-ton Mars ascent vehicle (MAV) which would use pump-fed liquid propulsion for a significant mass saving. A 20-kg sample package on the MAV was to contain 5 kg of Mars soil. A Johnson Space Center author subsequently referred to a launch from Earth in 1998 with a MAV mass in the range of 1400 to 1500 kg, including a pump-fed first stage and a pressure-fed second stage. The United States' Mars Exploration Program, formed after Mars Observer's failure in September 1993, supported a Mars sample return. One architecture was proposed by Glenn J. MacPherson in the early 2000s. In 1996, the possibility of life on Mars was raised when apparent microfossils were thought to have been found in the Martian meteorite ALH84001. This hypothesis was eventually rejected, but it led to renewed interest in a Mars sample return. In the mid-1990s, NASA funded JPL and Lockheed Martin to study affordable small-scale MSR mission architectures, including a concept to return 500 grams of Mars samples using a 100-kg MAV that would meet a small Mars orbiter for rendezvous and return to Earth. Robert Zubrin, a long-time advocate for human Mars missions, concluded in 1996 that the best approach to MSR would be launching directly to Earth using propellants made on Mars, because he considered a rendezvous in Mars orbit too risky; he estimated that a direct-return MAV would mass 500 kg, too heavy to send to Mars affordably if fully fueled from Earth. International peer reviewers concurred. In 1997, a detailed analysis of conventional small-scale rocket technology (both solid and liquid propellant) found that known propulsion components would be too heavy to build a MAV as light as several hundred kilograms, and "the application of launch vehicle design principles to the development of new hardware on a tiny scale" was suggested. In 1998, JPL presented a design for a two-stage pressure-fed liquid bipropellant MAV that would be 600 kilograms or less at Mars liftoff, intended for an MSR mission in 2005. The same JPL author collaborated on a notional single-stage 200-kg MAV intended to be made small by using pump-fed propulsion, which permits lightweight low-pressure liquid propellant tanks and compact high-pressure thrust chambers. This mass advantage of pump-fed operation was applied to a conceptual 100-kg MAV with a mass budget consistent with reaching Mars orbit using monopropellant, partly enabled by the simplicity of a single tank, a simplicity also relevant to Mars landings, which are typically done with monopropellant.
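The mass sensitivity driving these small-MAV studies follows from the ideal rocket equation. The sketch below is illustrative only: the delta-v and specific-impulse values are assumptions for a rough Mars ascent, not figures from any of the proposals above.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Tsiolkovsky rocket equation: fraction of liftoff mass that must be
    propellant to reach delta_v with specific impulse isp (in seconds)."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

DELTA_V = 4100.0  # m/s, assumed ascent to low Mars orbit including losses
for isp in (225, 290, 325):  # assumed monoprop-to-good-biprop range, s
    f = propellant_fraction(DELTA_V, isp)
    print(f"Isp {isp} s: propellant = {f:.1%} of liftoff mass; "
          f"a 100-kg MAV keeps only {100 * (1 - f):.0f} kg for everything else")
```

With roughly three-quarters or more of liftoff mass consumed by propellant, every kilogram of tank and pressurization hardware is costly, which is the appeal of pump-fed designs with light low-pressure tanks.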
The high-pressure thrusters and pump had previously been demonstrated in the 1994 flight of an experimental 21-kg rocket. As of late 1999, the MSR mission was anticipated to be launched from Earth in 2003 and 2005. Each launch was to deliver a rover and a Mars ascent vehicle, and a French-supplied Mars orbiter with Earth return capability was to be included in 2005. The 140-kg MAV, "in the process of being contracted to industry" at that time, was to include telemetry on its first stage and thrusters that would spin the vehicle to 300 RPM before separation of the simplified lightweight upper stage. Atop each MAV, a 3.6-kg, 16-cm-diameter spherical payload would contain 500 grams of samples and carry solar cells to power a long-life beacon to facilitate rendezvous with the Earth return orbiter. The orbiter would capture the sample containers delivered by both MAVs and place them in separate Earth entry vehicles. This mission concept, considered by NASA's Mars Exploration Program to return samples by 2008, was cancelled following a program review. In mid-2006, the International Mars Architecture for the Return of Samples (iMARS) Working Group was chartered by the International Mars Exploration Working Group (IMEWG) to outline the scientific and engineering requirements of an internationally sponsored and executed Mars sample-return mission in the 2018–2023 time frame. In October 2009, NASA and ESA established the Mars Exploration Joint Initiative to proceed with the ExoMars program, whose ultimate aim was "the return of samples from Mars in the 2020s". ExoMars's first mission was planned to launch in 2018, with unspecified missions to return samples in the 2020–2022 time frame. The cancellation of the caching rover MAX-C in 2011 and NASA's later withdrawal from ExoMars, both due to budget limitations, ended the mission. The pull-out was described as "traumatic" for the science community. In early 2011, the US National Research Council's Planetary Science Decadal Survey, which laid out mission planning priorities for the period 2013–2022, declared an MSR campaign its highest-priority flagship mission for that period. In particular, it endorsed the proposed Mars Astrobiology Explorer-Cacher (MAX-C) mission in a "descoped" (less ambitious) form. This mission plan was officially cancelled in April 2011. A key mission requirement for the Mars 2020 Perseverance rover mission was that it help prepare for MSR. The rover landed in Jezero Crater on 18 February 2021 to collect samples and store them in 43 cylindrical tubes for later retrieval. Jezero appears to be an ancient lakebed, suitable for ground sampling. Given its potential mission longevity, Perseverance may also be assigned to deliver the samples directly to the Sample Retrieval Lander. In support of the NASA-ESA Mars Sample Return, rock, regolith (Martian soil), and atmosphere samples are being cached by Perseverance. As of July 2025, 33 of the 43 sample tubes have been filled, including 8 igneous rock sample tubes, 13 sedimentary rock sample tubes, 3 igneous/impactite rock sample tubes, a serpentinite rock sample tube, a silica-cemented carbonate rock sample tube, two regolith sample tubes, an atmosphere sample tube, and three witness tubes.
Before launch, 5 of the 43 tubes were designated "witness tubes" and filled with materials that would capture particulates in the ambient environment of Mars. Three of the witness tubes will not be returned to Earth and will remain on the rover, as the sample canister will have only 30 tube slots. Further, 10 of the 43 tubes were left as backups at the Three Forks Sample Depot. Starting on 21 December 2022, Perseverance deposited 10 of its collected samples at the backup depot (Three Forks) to ensure that the MSR campaign could still succeed even if Perseverance were to run into problems. Proposals The NASA-ESA plan is to return samples using three missions: a sample collection mission (Perseverance), launched in 2020 and currently operational; a sample retrieval mission (Sample Retrieval Lander + Mars ascent vehicle + sample transfer arm + 2 Ingenuity-class helicopters); and a return mission (Earth Return Orbiter). Although NASA and ESA's proposal is still in the design stage, the first leg of gathering samples is currently being executed by the Perseverance rover on Mars, and components of the sample retrieval lander (second leg) are in the testing phase on Earth. The later phases were facing significant cost overruns as of August 2023. In November 2023, NASA was reported to have cut back the program due to a possible shortage of funds. As of January 2024, the plan was facing ongoing scrutiny due to budget and scheduling considerations, and a new overhaul plan was being pursued. In April 2024, NASA reported that the originally projected cost of $7 billion, with samples returned in 2033, had grown to an unacceptable $11 billion, with the return slipping to 2040, prompting the agency to search for a better solution. China has announced plans for a Mars sample-return mission to be called Tianwen-3. The mission would launch in late 2028, with a lander and ascent vehicle on a Long March 5 and an orbiter and return module launched separately on a Long March 3B. Samples would be returned to Earth in July 2031. A previous plan would have used a large spacecraft that could carry out all mission phases, including sample collection, ascent, orbital rendezvous, and return flight. This would have required the super-heavy-lift Long March 9 launch vehicle. Another plan involved using Tianwen-1 to cache the samples for retrieval. France has worked towards a sample return for many years. This work has included concepts for an extraterrestrial sample curation facility for returned samples and numerous proposals. France also worked on the development of a Mars sample-return orbiter, which would capture and return the samples as part of a joint mission with other countries. On 9 June 2015, the Japan Aerospace Exploration Agency (JAXA) unveiled a plan named Martian Moons eXploration (MMX) to retrieve samples from Phobos or Deimos. Phobos's orbit is closer to Mars, and its surface may have captured particles blasted from Mars. The launch from Earth is planned for 2026, with a return to Earth in 2031. Japan has also shown interest in participating in an international Mars sample-return mission. A Russian Mars sample-return mission concept is Mars-Grunt, which adopts the design heritage of Fobos-Grunt. Plans from 2011 envisioned a two-stage architecture with an orbiter and a lander (but no roving capability), with samples gathered from around the lander by a robotic arm. Back contamination Whether life forms exist on Mars is unresolved.
Thus, MSR could potentially transfer viable organisms to Earth, resulting in back contamination: the introduction of extraterrestrial organisms into Earth's biosphere. The scientific consensus is that the potential for large-scale effects, either through pathogenesis or ecological disruption, is small. Nevertheless, returned samples would be treated as potentially biohazardous until scientists determine that they are safe. The goal is to keep the probability of the release of a Martian particle below one in a million. The proposed NASA Mars sample-return mission will not be approved by NASA until the National Environmental Policy Act (NEPA) process has been completed. Furthermore, under the terms of Article VII of the Outer Space Treaty and other legal frameworks, were a release of organisms to occur, the releasing nation(s) would be liable for any resultant damages. The sample-return mission would be tasked with preventing contact between the Martian environment and the exterior of the sample containers. In order to eliminate the risk of parachute failure, the current plan is to use the thermal protection system to cushion the capsule upon impact (at terminal velocity). The sample container would be designed to withstand the force of the impact. To receive the returned samples, NASA has proposed a custom Biosafety Level 4 containment facility, the Mars Sample-Return Receiving facility (MSRRF). Other scientists and engineers, notably Robert Zubrin of the Mars Society, have argued in the Journal of Cosmology that the contamination risk is functionally zero, leaving little need for concern. They cite, among other things, the absence of any known incident, even though trillions of kilograms of material have been exchanged between Mars and Earth via meteorite impacts. The International Committee Against Mars Sample Return (ICAMSR) is an advocacy group, led by Barry DiGregorio, that campaigns against a Mars sample-return mission. While ICAMSR acknowledges a low probability of biohazards, it considers the proposed containment measures unsafe. ICAMSR advocates more in situ studies on Mars, and preliminary biohazard testing at the International Space Station, before the samples are brought to Earth. DiGregorio also supports the view that several pathogens – such as common viruses – originate in space and probably caused some mass extinctions and pandemics. These claims connecting terrestrial disease and extraterrestrial pathogens have been rejected by the scientific community.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-21] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities, along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine the dynamics of networks. For instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, rather than through the properties of those units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.
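To make "locating influential entities" concrete, here is a minimal sketch that is not drawn from the article: the actors, ties, and use of the Python networkx library are illustrative assumptions. It ranks the members of a toy friendship network by two standard centrality measures.

```python
# Minimal sketch (names and ties invented): rank actors in a toy
# friendship network by two standard centrality measures.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cai"), ("Ben", "Cai"),  # one tight cluster
    ("Cai", "Dee"),                                  # Cai bridges two groups
    ("Dee", "Eli"), ("Dee", "Fay"), ("Eli", "Fay"),  # a second cluster
])

degree = nx.degree_centrality(G)        # local prominence: share of direct ties
between = nx.betweenness_centrality(G)  # global brokerage: shortest paths through a node

for actor in G:
    print(f"{actor}: degree={degree[actor]:.2f}, betweenness={between[actor]:.2f}")
```

Degree centrality captures local prominence, while betweenness centrality flags brokers (here Cai and Dee) who sit on many shortest paths between otherwise separate groups.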
History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research on social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or as impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field occurred in the 1930s, when several groups in psychology, anthropology, and mathematics were working independently. In psychology, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, is often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to the emerging data available about online social networks, as well as to "digital traces" of face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.

Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.

Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads, and the study is carried forward with the theory of signed graphs (see the code sketch after this list).

Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.

Subset level: Subset-level research problems begin at the micro-level but may cross over into the meso-level of analysis. Subset-level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.
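As referenced in the triadic-level item above, Heider's balance rule has a compact signed-graph formulation: a triad is balanced when the product of its three tie signs is positive. The sketch below is a hedged illustration; the actors, tie signs, and use of networkx are invented for the example, not taken from the article.

```python
# Hedged illustration of Heider's balance rule: a triad is balanced
# when the product of its tie signs (+1 friendly, -1 hostile) is positive.
from itertools import combinations

import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", sign=+1)  # A and B are friends
G.add_edge("B", "C", sign=-1)  # B and C are rivals
G.add_edge("A", "C", sign=+1)  # A and C are friends -> tension

def is_balanced(graph, triad):
    """A triad is balanced iff the product of its three edge signs is +1."""
    product = 1
    for u, v in combinations(triad, 2):
        product *= graph[u][v]["sign"]
    return product > 0

print(is_balanced(G, ("A", "B", "C")))  # False: the rivalrous love triangle case
```

The unbalanced triad printed here resolves, under the theory, by flipping one tie: either B and C reconcile, or one friendship with A breaks.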
In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from those of interpersonal micro-level networks.

Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties, in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work-group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.

Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework can represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.

Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory, a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs" and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law. The Barabási–Albert model of network evolution is an example of a model that produces scale-free networks.

Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource-transfer interactions, over a large population.

Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in the social and behavioral sciences and in economics; originally, the term was used extensively in the computer sciences (see large-scale network mapping).

Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks, these features also include reciprocity, the triad significance profile (TSP; see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
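The heavy-tailed degree distribution described under "Scale-free networks" above is easy to observe in simulation. The following is a brief sketch using the Barabási–Albert preferential-attachment generator from networkx; the parameter choices are arbitrary illustrations, not values from the article.

```python
# Sketch: generate a preferential-attachment network and look for the
# heavy-tailed degree distribution characteristic of scale-free networks.
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

degrees = [d for _, d in G.degree()]
mean = sum(degrees) / len(degrees)          # stays near 2*m
print("mean degree:", round(mean, 2))
print("max degree:", max(degrees))          # hubs far above the average

# Heavy tail: a non-trivial number of nodes greatly exceed the mean.
print("nodes with degree > 5x mean:", sum(d > 5 * mean for d in degrees))
```

In a comparable Erdős–Rényi random graph with the same mean degree, essentially no node exceeds five times the mean, which is one quick way to see the difference the power-law tail makes.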
Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, because cliques tend to have more homogeneous opinions and to share many common traits. This homophilic tendency was the reason the members of the cliques were attracted to one another in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections, and information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are to some degree additive rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career, because one is more likely to hear of job openings and opportunities if one's network spans a wide range of contacts in different industries and sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.
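Burt's structural-hole position can be quantified. The following hedged sketch uses an invented toy network and networkx's constraint and effective_size measures (an assumed toolkit; the article prescribes no software): low constraint and high effective size mark a broker whose contacts are not connected to one another.

```python
# Sketch: two dense clusters joined only through "Broker". Burt's measures
# distinguish the broker (non-redundant contacts) from embedded members.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A1", "A2"), ("A2", "A3"), ("A1", "A3"),   # dense cluster A
    ("B1", "B2"), ("B2", "B3"), ("B1", "B3"),   # dense cluster B
    ("Broker", "A1"), ("Broker", "B1"),          # bridge across the hole
])

constraint = nx.constraint(G)        # high = contacts are redundant
eff_size = nx.effective_size(G)      # high = contacts are non-redundant

for node in ("Broker", "A2"):
    print(f"{node}: constraint={constraint[node]:.2f}, "
          f"effective size={eff_size[node]:.2f}")
```

The broker's two contacts are unconnected, so its effective size equals its degree and its constraint is low; an embedded member like A2 has fully interconnected, redundant contacts.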
Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how the network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; the promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs: murders diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Studies of the diffusion of ideas and innovations focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovations such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.
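Real respondent-driven sampling also involves recruitment coupons and statistical weighting, which the passage above does not cover. As a rough illustration of the referral mechanics alone, here is a simplified snowball-sampling sketch; the graph model, seeds, and parameters are all invented for the example.

```python
# Simplified snowball sampling: start from seed respondents and follow
# each respondent's referrals for a fixed number of waves.
import random

import networkx as nx

def snowball_sample(G, seeds, waves=2, referrals=3, rng=random.Random(0)):
    """Return the set of nodes reached from `seeds` in `waves` referral rounds."""
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            contacts = [n for n in G.neighbors(person) if n not in sampled]
            for contact in rng.sample(contacts, min(referrals, len(contacts))):
                sampled.add(contact)
                next_frontier.append(contact)
        frontier = next_frontier
    return sampled

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)  # stand-in "hidden" population
print(len(snowball_sample(G, seeds=[0, 50], waves=3)))
```

Because each wave reaches only the social neighborhoods of earlier respondents, the raw sample is biased toward well-connected individuals, which is exactly why respondent-driven sampling adds estimation weights on top of this referral process.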
The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists such as Mark Granovetter have developed core principles about the interactions of social structure, information, the ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes such as hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history, with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and that the relationships between different actors in the literary network, e.g. writers, critics, publishers and literary histories, can be mapped using visualization from social network analysis. Organizational network research studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, the density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network; it also indicates the level of ties among organizations. This dimension is closely connected to the relational dimension, which refers to the trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from one's social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they might otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any such study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that would otherwise not have been in contact, thus providing access to new ideas, opinions and opportunities. The British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities; this, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets, and the information exchange generated by dynamically alternating positions in a social network, promote creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study analyzed only Chinese firms, which tend to have strong communal sharing values; the information and control benefits of structural holes may still be valuable in firms that are not as inclusive and cooperative at the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization and, in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, the information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and volatile nature of social media have given rise to new network metrics; a key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Following the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used to simulate the process of homophily, but they can also serve as a measure of the level of exposure of different groups to each other within the current social network of individuals in a given area.
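One common way to quantify the homophily or segregation just described is an attribute assortativity coefficient, which is positive when ties disproportionately connect nodes sharing an attribute. The sketch below uses an invented two-group network (group labels and ties are illustrative assumptions) with networkx.

```python
# Sketch: measure homophily as assortativity over a node attribute.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2, 3], group="red")
G.add_nodes_from([4, 5, 6], group="blue")
G.add_edges_from([(1, 2), (2, 3), (1, 3),   # ties within the red group
                  (4, 5), (5, 6), (4, 6),   # ties within the blue group
                  (3, 4)])                  # single cross-group tie

r = nx.attribute_assortativity_coefficient(G, "group")
print(f"assortativity by group: {r:.2f}")   # well above 0 -> strong homophily
```

A coefficient near +1 indicates near-complete segregation by group, near 0 indicates mixing independent of the attribute, and negative values indicate disassortative (cross-group) tie formation.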
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lava_tube] | [TOKENS: 957] |
Contents Lava tube A lava tube, more rarely called a pyroduct or lava tunnel, is a natural roofed conduit along which molten lava flows from a volcanic vent. If the lava in the tube drains out, it will leave an empty cave. Lava tubes are common in low-viscosity volcanic systems. Lava tubes are important because they can transport molten lava much farther from the eruptive vent than lava channels can: a tube-forming lava flow can be emplaced over longer distances because its solid crust protects the molten lava from atmospheric cooling. Lava tubes are often considered when preparing hazard maps or managing an eruptive crisis. Formation A lava tube is a type of lava cave formed when a low-viscosity lava flow develops a continuous and hard crust, which thickens and forms a roof above the still-flowing lava stream. Three main formation mechanisms have been described: (1) the roofing-over of a lava channel, (2) pāhoehoe lobe extension, or (3) lava flow inflation. Characteristics A broad lava-flow field often forms a lava tube system that consists of a main lava tube and a series of smaller tubes that supply lava to the front of one or more separate flows. When the supply of lava stops at the end of an eruption, or lava is diverted elsewhere, lava in the tube system sometimes drains downslope and leaves partially or fully empty caves. Such drained tubes commonly exhibit numerous internal features that can give information on the activity that happened within the tube. Wall linings are thin layers of lava that cover the walls and ceiling of a tube; they form when the tube drains, and each wall lining corresponds to a cycle of drainage and refilling of the tube. Step marks on the walls indicate the various depths at which the lava flowed. These are known as lava benches, flow ledges or flow lines, depending on how prominently they protrude from the walls. Lava tubes generally have pāhoehoe floors, although these may often be covered in breakdown from the ceiling. A variety of stalactites, generally known as lavacicles, can be observed inside lava tubes; they can be of the splash, "shark tooth", or tubular varieties. Lavacicles are the most common internal feature of lava tubes. Drip stalagmites may form under tubular lava stalactites, and the latter may grade into a form known as a tubular lava helictite. A runner is a bead of lava that is extruded from a small opening and then runs down a wall. Lava tubes may also contain mineral deposits, most commonly in the form of crusts or small crystals and, less commonly, stalactites and stalagmites. Some stalagmites may contain a central conduit and are interpreted as hornitos extruded from the tube floor. Lava tubes can be up to 15 meters (50 ft) wide, though they are often narrower, and they run anywhere from 1 to 15 meters (3 to 50 ft) below the surface. Lava tubes can also be extremely long: one tube from the 1859 Mauna Loa flow enters the ocean about 50 kilometers (30 mi) from its eruption point, and the Cueva del Viento–Sobrado system on Teide, on the island of Tenerife, is over 18 kilometers (11 mi) long, owing to extensive braided maze areas in the upper zones of the system. A lava tube system in Kiama, Australia, consists of over 20 tubes, many of which are breakouts from a main lava tube. The largest of these lava tubes is 2 meters (7 ft) in diameter and has columnar jointing due to the large cooling surface. Other tubes have concentric and radial jointing features. These tubes are infilled, due to the low slope angle of emplacement.
Extraterrestrial lava tubes Lunar lava tubes have been discovered and have been studied as possible human habitats, since they would provide natural shielding from radiation. Several holes on the lunar surface, including one in the Marius Hills region, have been observed with angled satellite imagery to lead into voids wider than the holes themselves. These are considered possible collapses into lunar lava tubes. Martian lava tubes are associated with innumerable lava flows and lava channels on the flanks of Olympus Mons. Partially collapsed lava tubes are visible as chains of pit craters, and broad lava fans formed by lava emerging from intact subsurface tubes are also common. Evidence of Martian lava tubes has also been observed in the southeast Tharsis region and at Alba Mons. Caves, including lava tubes, are considered candidate biotopes of interest in the search for extraterrestrial life.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orion_Pictures] | [TOKENS: 6072] |
Contents Orion Pictures Orion Releasing, LLC (doing business as Orion Pictures) is an American film production and distribution company owned by the Amazon MGM Studios subsidiary of Amazon. In its current incarnation, Orion focuses primarily on producing, distributing, and acquiring independent and specialty films made by underrepresented filmmakers. It was founded in 1978 as Orion Pictures Corporation, a joint venture between Warner Bros. and three former senior executives at United Artists (UA). The company produced and released films from 1978 through 1999 and was also involved in television production and syndication in the 1980s and early 1990s. It was one of the largest mini-major studios during its early years, when it worked with prominent directors such as Woody Allen, James Cameron, Jonathan Demme, and Oliver Stone. Four films distributed by Orion won Academy Awards for Best Picture: Amadeus (1984), Platoon (1986), Dances with Wolves (1990), and The Silence of the Lambs (1991). In 1997, Orion was acquired by Metro-Goldwyn-Mayer (MGM), and was folded into MGM in 1999. MGM later revived the Orion name for television in 2013 and relaunched Orion Pictures a year later. In 2022, Amazon acquired Orion when it acquired MGM. History On February 6, 1978, three executives of the Transamerica (TA)-owned studio United Artists (UA)—Arthur B. Krim (chairman), Eric Pleskow (president and chief executive officer), and Robert Benjamin (chairman of the finance committee)—quit their jobs. Krim and Benjamin had headed UA since 1951 and had turned around the then-flailing studio with a number of critical and commercial successes. Change had begun once Transamerica purchased UA in 1967, and within a decade a rift had formed between Krim and Transamerica chairman John R. Beckett concerning the studio's operations. Krim suggested spinning off UA into a separate company, which Beckett rejected. The last straw came for Pleskow when, for the sake of confidentiality, he refused to collect and deliver the medical records of UA department heads to Transamerica's offices in San Francisco. The tensions only worsened when Fortune magazine published an article on the clash between UA and TA in which Beckett stated that, if the executives disliked the parent company's treatment of them, they should resign. Krim, Benjamin and Pleskow quit UA on January 13, 1978, followed by the exits of senior vice presidents William Bernstein and Mike Medavoy three days later. The week following the resignations, according to the website Reference for Business, 63 important Hollywood figures took out an advertisement in a trade paper warning Transamerica that it had made a fatal mistake in letting the five men leave. The "fatal mistake" came true following the box-office disaster of Heaven's Gate in 1980, which led to Transamerica selling UA to Metro-Goldwyn-Mayer (MGM). Shortly after leaving UA in 1978, the five men forged a deal with Warner Bros. The executives formed Orion Pictures Company, named after the constellation, which they claimed had five main stars (it actually has seven or eight). The new company intended only to finance projects, giving the filmmakers complete creative autonomy; this ideal had been successfully implemented at United Artists. Orion held a $100 million line of credit, and its films would be distributed by the Warner Bros. studio. Orion, however, was contractually given free rein over distribution and advertising, as well as over the number and type of films the executives chose to invest in.
In late March 1978, Orion signed its first contract, a two-picture deal with John Travolta's production company. Contracts with actress and director Barbra Streisand; actors James Caan, Jane Fonda, Peter Sellers, Jon Voight, and Burt Reynolds; directors Francis Ford Coppola and Blake Edwards; writer/director John Milius; singer Peter Frampton; and producer Ray Stark soon materialized. Orion also developed a co-financing and distribution deal with EMI Films. In its first year, Orion had fifteen films in production and a dozen more actors, directors and producers lining up to sign with it. Orion's first film, A Little Romance, was released in April 1979; Benjamin died in October of that year. Later in 1979, Orion released Blake Edwards' 10, which became a commercial success, the first for Edwards in over a decade (aside from installments of The Pink Panther franchise). Other films released by Orion over the next two years included a few successes, such as Caddyshack (1980) and Arthur (1981); critically praised but underperforming films, such as The Great Santini (1979), an adaptation of a Pat Conroy novel, and Sidney Lumet's Prince of the City (1981); and pictures by young writer-directors, such as Philip Kaufman's The Wanderers (1979) and Nicholas Meyer's debut Time After Time (1979); plus Monty Python's Life of Brian (1979), which Orion distributed only in the United States. Of the 23 films Orion released between April 1979 and December 1981, only a third made a profit. Orion executives were conflicted over financing big-budget films and passed on Raiders of the Lost Ark (1981) for that reason. By early 1982, Orion had severed its distribution ties with Warner Bros. As part of the deal, the rights to Orion's films made up to that point were sold to Warner Bros. Orion now sought its own distribution network by acquiring a company with such capabilities. The four partners looked into Allied Artists and Embassy Pictures before settling on Filmways. Orion subsequently purchased Filmways and reorganized the flailing company. New employees were hired, and all of Filmways' non-entertainment assets (Grosset & Dunlap and Broadcast Electronics) were sold off. Another result of the merger was that Orion entered television production. Orion's biggest television hit was Cagney & Lacey, which lasted seven seasons on CBS. In 1983, Orion Pictures introduced the art-house division Orion Classics, with executives who had previously run United Artists Classics. Of the initial 18 films released by the firm under the name Orion Pictures Corporation, ten made profits, five just managed to cover their costs, and three suffered losses of under $2 million. One such film, Francis Ford Coppola's The Cotton Club, was mired in legal troubles, and Orion lost $3 million of its investment. "We've had some singles and doubles [but haven't] had any home runs," lamented Krim. In September 1984, Orion distributed Amadeus, which garnered many accolades, winning eight Academy Awards, including Best Picture. Earlier that year, on April 3, 1984, Orion Pictures had launched the Orion Entertainment Group, consisting of four units: Orion Television, Orion Home Video, Orion Pay Television and Orion Television Syndication. The new organization would produce and distribute product for the television, home video, pay and syndicated markets, with Jamie Kellner serving as president.
On October 26, 1984, the company released the James Cameron-directed science fiction film The Terminator, which was well received by critics and audiences and led to a franchise involving five further films; however, Orion distributed none of the follow-ups. For Orion, 1985 was a dismal year. All but two films, Desperately Seeking Susan and Code of Silence, made less than $10 million at the United States box office, including an unsuccessful attempt at a James Bond-type franchise, Remo Williams: The Adventure Begins. Orion's haphazard distribution channels and unsuccessful advertising campaigns made it impossible to achieve a hit. Another factor was that Orion, about to venture into the video business, had stopped selling home-use rights to its films. Furthermore, the production of the Rodney Dangerfield comedy Back to School was put on hold when a co-producer died, taking the film off its Christmas 1985 release slate. In January 1986, Mario Kassar and Andrew Vajna, producers of the Rambo films (the first film, First Blood, was distributed by Orion), attempted to buy $55 million worth of the studio's stock through the duo's company, Anabasis. Had they succeeded, Kassar and Vajna would have controlled the board and laid off every executive save for Krim. Warburg Pincus subsequently reduced its 20% stake in Orion to 5%; the remaining stock was acquired by Viacom International. Viacom hoped to use Orion's product for its pay-television channel Showtime. Orion expanded into home video distribution with the formation of Orion Home Entertainment Corporation in 1985, which began distributing videos under the Orion Home Video label in 1987 (before OHV's formation, HBO Video and its predecessors, as well as Orion's former partner Warner Home Video, plus Vestron Video and Embassy Home Entertainment, had been responsible for home media releases of Orion product). On May 22, 1986, a 6.5% stake in Orion was purchased by Metromedia, a television and communications company controlled by billionaire John Kluge, a friend of Krim's. Metromedia had just divested its television station group to Rupert Murdoch's News Corporation (which would form what is now the Fox network). Kluge's investment in Orion came at the right time; Back to School was a success that earned $90 million at the box office. By March 1987, the studio's fortunes had improved dramatically with a succession of critical and commercial hits, including Platoon (which ultimately won the Best Picture Oscar), Woody Allen's Hannah and Her Sisters, and the sports film Hoosiers. Orion's 1986 offerings drew 18 Academy Award nominations, more than any other studio's. In 1987, Orion achieved further success with RoboCop and No Way Out. By this time, Orion's television division had expanded into the lucrative syndicated game show market under the name Century Towers Productions, a reference to Orion's street address. It produced revivals of formats inherited from Heatter-Quigley Productions, owned since the late 1960s by Filmways; these included The New Hollywood Squares, which ran from 1986 to 1989, and a revival of High Rollers that aired in the 1987-88 season. 1987 also saw the arrival of former CBS/Fox Video executive Len White, who became president and CEO of Orion Home Video, with plans to release its first home video titles in the third or fourth quarter of that year; he reported to Larry Hilford, who had joined the home video division two years earlier.
In January 1987, Kluge faced competition with the arrival of Sumner Redstone, whose theater chain, National Amusements, purchased 6.42% of Orion's stock. National Amusements later acquired Viacom, increasing its Orion stake to 21%, then 26%. Soon Kluge started buying more Orion stock, touching off a battle with Redstone for control of the company. Kluge won on May 20, 1988, when Metromedia took over about 67% of Orion. One analyst told The Wall Street Journal: "This amount is probably so small to Kluge it doesn't matter. He probably burns that up in a weekend." In 1989, Orion suffered from a disastrous slate of films, placing dead last among the larger Hollywood studios by box office revenue. Among its biggest flops that year were Great Balls of Fire!, a biography of Jerry Lee Lewis starring Dennis Quaid and Winona Ryder; She-Devil, a dark comedy starring Meryl Streep and Roseanne Barr; Speed Zone, an action-comedy vehicle for SCTV alumni John Candy, Joe Flaherty, and Eugene Levy; and Miloš Forman's adaptation of Les Liaisons dangereuses, Valmont, which competed with Dangerous Liaisons, based on the same source material. Test screenings of the "Weird Al" Yankovic comedy UHF were so strong that Orion had high expectations for it, but it flopped at the box office (though it later developed a cult following on video). Also that year, Orion signed a deal with Nelson Entertainment to distribute titles on videocassette and theatrically. In February 1990, Orion signed a deal with Columbia Pictures Entertainment in which the much larger studio would pay Orion $175 million to distribute Orion's movies and television programs overseas; Orion had previously licensed its films to individual distributors territory by territory. That same month, Mike Medavoy left Orion to become head of Tri-Star Pictures. The box-office returns for Orion's 1990 releases were just as dismal, including the failures The Hot Spot and State of Grace. The only bright spot was Kevin Costner's western epic Dances with Wolves, which won seven Academy Awards, including Best Picture, and grossed $400 million worldwide. A few months later, Orion garnered another winner with The Silence of the Lambs, but these two films could not make up for years of losses. Only Kluge's continued infusions of cash kept the company afloat, but he soon had enough. Kluge first attempted to sell Orion to businessman (and former 20th Century Fox owner) Marvin Davis. Sony, which had recently purchased Columbia Pictures, was also interested. When those talks fell through, Kluge took drastic steps. First, Orion shut down production. Second, Kluge ordered the sale of several projects, such as The Addams Family (which went to Paramount, though Orion retained the international rights to the film), in order to accumulate much-needed cash. Finally, in the spring of 1991, Kluge's people took over the company, leading to the departure of Arthur Krim. Orion's financial problems were so severe that at the 63rd Annual Academy Awards in March 1991, host Billy Crystal referenced Orion's debt in his opening monologue, joking that "Reversal of Fortune [is] about a woman in a coma, Awakenings [is] about a man in a coma; and Dances with Wolves [was] released by Orion, a studio in a coma." It was during this time that ABC stepped in to co-finance and assume production of many of the shows Orion Television had in production, such as American Detective and Equal Justice.
After Orion shut down its television division, projects such as The Chuck Woolery Show, which Orion had planned to produce, had to find new production companies (Group W Productions, in Woolery's case). Gary Nardino, a former employee of Orion Television Entertainment, moved on to producing for Lorimar Television, taking some of Orion's projects with him, including Bill & Ted's Excellent Adventures on Fox and Hearts Are Wild, a co-production with Spelling Television, for CBS; the talent deals Orion Television had at the time (with Thomas Carter, Robert Townsend, Paul Stajonovich, Clifton Campbell and Deborah Joy Levine) were also taken by Nardino to Lorimar. On November 25, 1991, after closing down its television division, Orion sold its Hollywood Squares format rights to King World Productions. On December 11, 1991, Orion filed for Chapter 11 bankruptcy protection. That same month, Orion was in talks with New Line Cinema, a successful independent film company, about acquiring the bankrupt studio; by the following April, Orion and New Line Cinema had cancelled their plans over the issue of price. Republic Pictures and the then-new Savoy Pictures also attempted to buy Orion, but no deal materialized. In February 1992, Bernstein, by then president and chief executive of Orion, resigned from the studio; he would go on to become executive vice president at Paramount Pictures. At the Academy Awards ceremony broadcast on March 30, 1992, Crystal made another reference to Orion, this time about its demise: "Take a great studio like Orion: a few years ago Orion released Platoon, it wins Best Picture. Amadeus, Best Picture. Last year, they released Dances with Wolves, wins Best Picture. This year The Silence of the Lambs is nominated for Best Picture. And they can't afford to have another hit! But there is good news and bad news. The good news is that Orion was just purchased, and the bad news is it was bought by the House of Representatives." The Silence of the Lambs swept all five major Academy Awards; however, a majority of key executives, as well as the talent they had deals with, had left the studio. Hollywood observers doubted that Orion would be restored to its former glory. In May 1992, it was reported that Pleskow was resigning from Orion on July 1 of that year, telling The New York Times: "There is little for me to do at this point". On November 5, 1992, Orion emerged from bankruptcy. Its reorganization plan would allow Orion to continue producing and releasing films, but financing for the features would be provided by outside sources, with the studio purchasing the distribution rights to them after their completion. Orion's bankruptcy also delayed the release of many films the studio had produced or acquired, among them Love Field (1992), RoboCop 3 (1993), The Dark Half (1993), Blue Sky (1994), Car 54, Where Are You? (1994), Clifford (1994), The Favor (1994), and There Goes My Baby (1994). Orion began releasing these films after its reorganization. Blue Sky won star Jessica Lange an Academy Award for Best Actress in 1995. In August 1994, Orion Home Video partnered with Streamline Pictures to distribute the latter's licensed anime video titles to general retailers, which animation historian Fred Patten considered a major development in anime's growing popularity in American pop culture.
In November 1995, Orion, two other companies controlled by Kluge, and film and television house MCEG Sterling (producer of the Look Who's Talking series) were merged to form the Metromedia International Group. Few of the films released during the four years after bankruptcy protection were successful either critically or commercially. In 1996, Metromedia acquired the production company Motion Picture Corporation of America and installed its heads, Brad Krevoy and Steve Stabler, as co-presidents of Orion. Both received a six-picture put distribution deal as part of their contracts. In the years ahead, Orion produced very few films and primarily released films from other producers, including LIVE Entertainment. Orion Classics, minus its founders (who had moved to Sony Pictures Entertainment and founded Sony Pictures Classics), continued to acquire popular art-house films, such as Boxing Helena (1993), before Metromedia merged the subsidiary with Samuel Goldwyn Entertainment in 1996. In July 1997, Metromedia shareholders approved the sale of Orion Pictures (as well as Samuel Goldwyn Entertainment and Motion Picture Corporation of America) to Metro-Goldwyn-Mayer (MGM). This led to the withdrawal of 85 employees, including Krevoy and Stabler, while 111 other employees were to be laid off within nine months, leaving 25 of them to work at MGM. Orion Pictures also brought with it a 2,000-film library, ten completed movies, five direct-to-video features for future release, and the Krevoy and Stabler put distribution deal. Krevoy and Stabler retained the rights to the Motion Picture Corporation of America name and their three top movies. Metromedia retained Goldwyn Entertainment's Landmark Theatre Group. The remaining Orion Pictures films released in 1998 and 1999 had been shot in 1997 at the latest, with One Man's Hero (1999) being the last film released by Orion Pictures for 15 years. MGM kept Orion Pictures intact as a corporation, mostly to avoid its home video distribution agreement with Warner Home Video, and began distributing Orion Pictures films under the Orion Home Video label. MGM acquired two thirds of the pre-1996 PolyGram Filmed Entertainment library (which included the Epic film library) from Seagram in 1999 for $250 million, increasing its library holdings to 4,000 titles. The PolyGram libraries were purchased through MGM's Orion Pictures subsidiary so as to avoid Orion's 1990 home video distribution agreement with Warner Home Video. In March 1999, MGM bought out its distribution contract with Warner Home Video for $225 million, effectively ending the distribution problem. In 2013, Orion returned to television production (after its original television unit was shut down during its bankruptcy period) with a new syndicated court show, Paternity Court. The Orion Pictures name, also as Orion Releasing, was revived in the fourth quarter of 2014 for smaller multi-platform video-on-demand and limited theatrical distribution. The name was first seen again on September 10, 2014, in front of the trailer for The Town That Dreaded Sundown, which was released in October. The label's first release was the Brazilian film Vestido pra Casar. In September 2015, Entertainment One Films relaunched the Momentum Pictures banner with an announced deal with Orion Pictures to co-acquire and co-distribute films in the United States and Canada, and in selected foreign markets, such as the United Kingdom (Momentum's country of origin).
The initial films under the deal were The Wannabe, Fort Tilden and Balls Out. Other films released by Orion Pictures and Momentum Pictures include Pocket Listing and Diablo. Starting in September 2016 with Burn Country, Orion Pictures and Samuel Goldwyn Films paired in acquiring several films. Orion Television launched a second court show in the fall of 2017, Couples Court With The Cutlers, which features married couple Keith and Dana Cutler presiding over romantic and domestic disputes. On September 6, 2017, MGM officially revitalized the Orion Pictures brand as a standalone US theatrical marketing and distribution arm with the hiring of John Hegeman, who joined from Blumhouse Tilt (distributor of Orion's The Town That Dreaded Sundown and The Belko Experiment) and who, incidentally, got his start at the original Orion in the 1980s. Hegeman would serve as president of the expanded label and report to Jonathan Glickman, president of MGM's motion picture group. Under his leadership, the "new" Orion would produce, market and distribute four to six modestly budgeted films a year across genres and platforms, in both wide and limited releases for targeted audiences. Its first release, the young-adult romance drama Every Day, came out on February 23, 2018. In May 2018, it was announced that Orion Classics would be revived as a multiplatform distribution label, with 8 to 10 films to be released per year. On February 5, 2019, MGM and Annapurna Pictures expanded their US joint distribution venture, Mirror, rebranding it as United Artists Releasing. Beginning in April 2019, Orion Pictures' upcoming titles would be distributed through the UAR banner, and Orion's theatrical distribution staff would move to UAR. The first Orion film to do so was the remake of Child's Play, released on June 21, 2019. On August 20, 2020, it was announced that Orion would be relaunched again, with its focus shifting to films made by underrepresented filmmakers (including people of color, women, the LGBT community and people with disabilities) as part of efforts to increase inclusivity in the film industry, both in front of and behind the camera, with the hiring of Alana Mayo as president, replacing Hegeman by October. The first film released with this new focus was Anything's Possible (previously titled What If?), a coming-of-age drama directed by Billy Porter in his directorial debut. This effort continued in 2021 when Orion, along with Annapurna, acquired the US distribution rights to On the Count of Three two weeks after it premiered at the 2021 Sundance Film Festival. On May 17, 2021, the online shopping company Amazon entered negotiations to acquire MGM and made a bid of about $9 billion, with the intention of owning the studio's library, including Orion's films, to grow the Amazon Prime Video catalog. The negotiations were conducted with Anchorage Capital's Kevin Ulrich. On May 26, 2021, it was officially announced that MGM would be acquired by Amazon for $8.45 billion. The merger was finalized on March 17, 2022. On March 4, 2023, Amazon shut down UAR's operations and folded them into MGM, resulting in MGM becoming Orion's new domestic distributor, with Warner Bros. Pictures becoming the studio's new international distributor. In May 2023, Amazon Studios created Amazon MGM Studios Distribution, an international film and television distribution unit for both MGM and Amazon projects, which would include new projects from Orion.
On September 17, 2023, American Fiction became the studio's first film to win the People's Choice Award at that year's Toronto International Film Festival.
Film library
During the 1980s and early 1990s, Orion's output included Woody Allen films, Hollywood blockbusters such as the first Terminator and the RoboCop films, comedies such as Throw Momma from the Train, Dirty Rotten Scoundrels, Caddyshack, Something Wild, UHF, and the Bill & Ted films, and Best Picture Academy Award winners Amadeus, Platoon, Dances with Wolves, and The Silence of the Lambs. Following Amazon's purchase of MGM Holdings, Orion earned three consecutive Best Picture Academy Award nominations with Women Talking (2022), American Fiction (2023), and Nickel Boys (2024).
Almost all of Orion's post-1982 releases, as well as most of the AIP and Filmways backlogs and all of the television output originally produced and distributed by Orion Television, now bear the MGM name. However, in most cases the 1980s Orion logo has been retained or, in the case of the Filmways and AIP libraries, added. Most ancillary rights to Orion's back catalog from the 1978–1982 joint venture period remain with Warner Bros., including such films as 10 (1979), Caddyshack (1980), Arthur (1981), Excalibur (1981), and Prince of the City (1981). Some post-1982 films originally released by Orion, such as Lionheart (1987), The Unbearable Lightness of Being (1988), and Amadeus (1984) (the latter two being Saul Zaentz productions), are currently distributed by Warner Bros. as well. HBO also owns video distribution rights to Three Amigos (1986), as it co-produced the film, and it holds pay-TV rights; however, MGM owns all other rights and the film's copyright. The Wanderers is owned by the film's producers; however, the copyright is held by MGM/Orion. Orion also retains a controlling interest in The Cotton Club, although major rights are now with Lionsgate, which owns the library of presenting studio Zoetrope Corporation. Woody Allen's films A Midsummer Night's Sex Comedy (1982) and Zelig (1983) are the only Orion films from the original joint venture period now owned by MGM, as the rights for them remained with Allen, who sold them to MGM in 2000. Orion releases produced by the Hemdale Film Corporation and Nelson Entertainment are included in MGM's library as well, and are incorporated into the Orion library. MGM did not acquire the Hemdale films (which include The Terminator, Hoosiers, and Platoon) or the Nelson films (including the Bill & Ted films) until MGM bought the pre-1996 library of PolyGram Filmed Entertainment (the "Epic library"), which included both companies' libraries, although the television and digital rights to certain Nelson films are now held by Paramount Television (the result of a pre-existing deal Nelson had with Viacom), with television syndication handled on behalf of Paramount Television by Trifecta Entertainment & Media.
Many of the film and television holdings of The Samuel Goldwyn Company have now also been incorporated into the Orion library (with ownership currently held by MGM), and the copyright on some of this material is held by Orion, except The New Adventures of Flipper, which now carries the MGM Television Entertainment copyright. MGM still holds distribution rights to the 1980s revivals of Hollywood Squares and High Rollers that the company produced, as well as the remnants of the Heatter-Quigley library that were not erased, including all remaining episodes of the original Squares; MGM does not own the rights to the format, which are currently held by CBS Television Distribution, successor-in-interest to King World, which purchased the format rights in 1991 and produced another syndicated revival from 1998 to 2004. Orion distributed the first Rambo film, First Blood (1982). That film, like the rest of the Rambo franchise, is now owned by StudioCanal as a result of its purchase of the library of the film's co-distributor, Carolco Pictures.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars_2020] | [TOKENS: 4895] |
Mars 2020
Mars 2020 is a NASA mission that includes the rover Perseverance, the now-grounded small robotic helicopter Ingenuity, and associated delivery systems, as part of the Mars Exploration Program. Mars 2020 was launched on an Atlas V rocket at 11:50:01 UTC on July 30, 2020, and landed in the Martian crater Jezero on February 18, 2021, with confirmation received at 20:55 UTC. On March 5, 2021, NASA named the landing site Octavia E. Butler Landing. As of 21 February 2026, Perseverance has been on Mars for 1779 sols (1829 total days; 5 years, 3 days). Ingenuity operated on Mars for 1042 sols (1071 total days; 2 years, 341 days) before sustaining serious damage to its rotor blades (possibly all four), which led NASA to retire the craft on January 25, 2024. Perseverance is investigating an astrobiologically relevant ancient environment on Mars, studying its surface geological processes and history, and assessing its past habitability, the possibility of past life on Mars, and the potential for preservation of biosignatures within accessible geological materials. It will cache sample containers along its route for retrieval by a potential future Mars sample-return mission. The Mars 2020 mission was announced by NASA in December 2012 at the fall meeting of the American Geophysical Union in San Francisco. Perseverance's design is derived from the rover Curiosity, and it uses many components already fabricated and tested, in addition to new scientific instruments and a core drill. The rover also employs nineteen cameras and two microphones, allowing for audio recording of the Martian environment. On April 30, 2021, Perseverance became the first spacecraft to hear and record another spacecraft, the Ingenuity helicopter, on another planet. The launch of Mars 2020 was the third of three space missions sent toward Mars during the July 2020 Mars launch window, with missions also launched by the national space agencies of the United Arab Emirates (the Emirates Mars Mission with the orbiter Hope on July 19, 2020) and China (the Tianwen-1 mission on July 23, 2020, with an orbiter, deployable and remote cameras, lander, and Zhurong rover).
Conception
The Mars 2020 mission was announced by NASA on December 4, 2012, at the fall meeting of the American Geophysical Union in San Francisco. The selection of Mars as the target of NASA's flagship mission elicited surprise from some members of the scientific community. Some criticized NASA for continuing to focus on Mars exploration instead of other Solar System destinations in constrained budget times. Support came from California U.S. Representative Adam Schiff, who said he was interested in the possibility of advancing the launch date, which would enable a larger payload. Science educator Bill Nye endorsed the Mars sample-return role, saying this would be "extraordinarily fantastic and world-changing and worthy."
Objectives
The mission is aimed at seeking signs of habitable conditions on Mars in the ancient past, and also at searching for evidence, or biosignatures, of past microbial life, and water. The mission was launched July 30, 2020, on an Atlas V-541, and the Jet Propulsion Laboratory manages the mission. The mission is part of NASA's Mars Exploration Program. The Science Definition Team proposed that the rover collect and package as many as 31 samples of rock cores and surface soil for a later mission to bring back for definitive analysis on Earth.
In 2015, NASA expanded the concept, planning to collect even more samples and distribute the tubes in small piles or caches across the surface of Mars. In September 2013, NASA had launched an Announcement of Opportunity for researchers to propose and develop the instruments needed, including the Sample Caching System. The science instruments for the mission were selected in July 2014 after an open competition based on the scientific objectives set one year earlier. The science conducted by the rover's instruments will provide the context needed for detailed analyses of the returned samples. The chairman of the Science Definition Team stated that NASA does not presume that life ever existed on Mars, but that, given the recent Curiosity rover findings, past Martian life seems possible.
The Perseverance rover will explore a site likely to have been habitable. It will seek signs of past life, set aside a returnable cache with the most compelling rock core and soil samples, and demonstrate the technology needed for the future human and robotic exploration of Mars. A key mission requirement is that it must help prepare NASA for its long-term Mars sample-return and crewed mission efforts. The rover will make measurements and technology demonstrations to help designers of a future human expedition understand any hazards posed by Martian dust, and will test technology to produce a small amount of pure oxygen (O2) from Martian atmospheric carbon dioxide (CO2). Improved precision landing technology that enhances the scientific value of robotic missions will also be critical for eventual human exploration on the surface. Based on input from the Science Definition Team, NASA defined the final objectives for the 2020 rover. Those became the basis for soliciting proposals to provide instruments for the rover's science payload in the spring of 2014. The mission will also attempt to identify subsurface water, improve landing techniques, and characterize weather, dust, and other potential environmental conditions that could affect future astronauts living and working on Mars.
A key mission requirement for this rover is that it must help prepare NASA for its Mars sample-return (MSR) campaign, which is needed before any crewed mission takes place. Such an effort would require three additional vehicles: an orbiter, a fetch rover, and a two-stage, solid-fueled Mars ascent vehicle (MAV). Between 20 and 30 drilled samples will be collected and cached inside small tubes by the Perseverance rover, and will be left on the surface of Mars for possible later retrieval by NASA in collaboration with ESA. A "fetch rover" would retrieve the sample caches and deliver them to the MAV. In July 2018, NASA contracted Airbus to produce a "fetch rover" concept study. The MAV would launch from Mars, enter a 500 km orbit, and rendezvous with the Next Mars Orbiter or Earth Return Orbiter. The sample container would be transferred to an Earth entry vehicle (EEV), which would bring it to Earth, enter the atmosphere under a parachute, and hard-land for retrieval and analysis in specially designed safe laboratories.
In the first science campaign, Perseverance performed an arcing drive southward from its landing site to the Séítah unit, performing a "toe dip" into the unit to collect remote-sensing measurements of geologic targets. After that, it returned to the Crater Floor Fractured Rough to collect the first core sample there. Passing by the Octavia E.
Butler landing site concluded the first science campaign. The second campaign began with several months of travel toward "Three Forks", where Perseverance could access geologic locations at the base of the ancient river delta fed by the Neretva Vallis channel, as well as ascend the delta by driving up a valley wall to the northwest. At a rock named "Wildcat Ridge", located within Jezero's well-preserved sedimentary fan deposit, Perseverance found evidence for an ancient lake environment. Not only were these sediments likely deposited in a standing body of water, but they also continued to interact with water long after they were formed. The environments recorded within the rocks at Wildcat Ridge would have been habitable for ancient microbial life, and this type of rock is ideal for preserving possible signs of ancient life. Scientists also found that "sediments entering Jezero's lake were deposited in a delta" and "evidence for late-stage, high-energy flooding that carried large boulders into the crater." The MOXIE experiment produced 122 grams of oxygen from carbon dioxide by splitting atmospheric CO2 into O2 and carbon monoxide. The microphone studies showed that on Mars the speed of sound is slower, and the volume of sound transmitted through the atmosphere lower, than on Earth. PIXL found that the Séítah formation and a rock at "Otis Peak" contained olivine, phosphates, sulfates, clays, carbonate minerals, silicate minerals, "augite pyroxene, feldspathic mesostasis, various Fe,Cr,Ti-spinels, and merrillite", perchlorate, feldspar, magnesite, siderite, oxides, as well as minerals with compositions including magnesium, iron, chlorine, and sodium. RIMFAX revealed findings "consistent with a subsurface dominated by solid rock and mafic material" and that "the crater floor experienced a period of erosion before the deposition of the overlying delta strata. The regularity and horizontality of the basal delta sediments observed in the radar cross sections indicate that they were deposited in a low-energy lake environment."
Spacecraft
The three major components of the Mars 2020 spacecraft are the 539 kg (1,188 lb) cruise stage for travel between Earth and Mars; the Entry, Descent, and Landing System (EDLS), which includes the 575 kg (1,268 lb) aeroshell descent vehicle and 440 kg (970 lb) heat shield; and the 1,070 kg (2,360 lb) (fueled mass) descent stage needed to deliver Perseverance and Ingenuity safely to the Martian surface. The descent stage carries 400 kg (880 lb) of landing propellant for the final soft-landing burn after being slowed by a 21.5 m (71 ft)-wide, 81 kg (179 lb) parachute. The 1,025 kg (2,260 lb) rover is based on the design of Curiosity. While there are differences in scientific instruments and the engineering required to support them, the entire landing system (including the descent stage and heat shield) and rover chassis could essentially be recreated without any additional engineering or research. This reduces overall technical risk for the mission, while saving funds and time on development. One of the upgrades is a guidance and control technique called "Terrain Relative Navigation" (TRN) to fine-tune steering in the final moments of landing. This system allowed for a landing inside a 7.7 km × 6.6 km (4.8 mi × 4.1 mi) ellipse with a positioning error within 40 m (130 ft), while avoiding obstacles. This is a marked improvement over the Mars Science Laboratory mission, whose landing ellipse measured 7 by 20 km (4.3 by 12.4 mi); a rough comparison of the two ellipse areas is sketched below.
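As a back-of-the-envelope comparison (not from the source text: the quoted dimensions are treated as the full axes of ideal ellipses), the TRN ellipse covers roughly a third of the area of Curiosity's:

import math

def ellipse_area(width_km, height_km):
    # Area of an ellipse from its full axis lengths (diameters): pi * a * b.
    return math.pi * (width_km / 2) * (height_km / 2)

msl_area = ellipse_area(7.0, 20.0)    # Curiosity (MSL) landing ellipse
m2020_area = ellipse_area(7.7, 6.6)   # Perseverance landing ellipse with TRN

print(f"MSL:       ~{msl_area:.0f} km^2")                # ~110 km^2
print(f"Mars 2020: ~{m2020_area:.0f} km^2")              # ~40 km^2
print(f"Reduction: ~{(1 - m2020_area / msl_area):.0%}")  # ~64% smaller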
In October 2016, NASA reported using the Xombie rocket to test the Lander Vision System (LVS), as part of the Autonomous Descent and Ascent Powered-flight Testbed (ADAPT) experimental technologies, for the Mars 2020 mission landing, meant to increase the landing accuracy and avoid obstacle hazards.
Perseverance rover
Perseverance was designed with help from Curiosity's engineering team, as both rovers are quite similar and share common hardware. Engineers redesigned Perseverance's wheels to be more robust than Curiosity's, which, after kilometres of driving on the Martian surface, have shown progressive deterioration. Perseverance has thicker, more durable aluminium wheels that are narrower and of greater diameter, 52.5 cm (20.7 in), than Curiosity's 50 cm (20 in) wheels. The aluminium wheels are covered with cleats for traction and curved titanium spokes for springy support. The combination of the larger instrument suite, the new Sampling and Caching System, and the modified wheels makes Perseverance 14 percent heavier than Curiosity, at 1,025 kg (2,260 lb) versus 899 kg (1,982 lb). The rover includes a five-jointed robotic arm measuring 2.1 m (6 ft 11 in) long, used in combination with a turret to analyze geologic samples from the Martian surface.
A Multi-Mission Radioisotope Thermoelectric Generator (MMRTG), left over as a backup part for Curiosity during its construction, was integrated onto the rover to supply electrical power. The generator has a mass of 45 kg (99 lb) and contains 4.8 kg (11 lb) of plutonium dioxide as a steady source of heat that is converted to electricity. The electrical power generated is approximately 110 watts at launch, with little decrease over the mission time (a rough decay sketch follows below). Two lithium-ion rechargeable batteries are included to meet peak demands of rover activities when demand temporarily exceeds the MMRTG's steady electrical output. The MMRTG offers a 14-year operational lifetime, and it was provided to NASA by the United States Department of Energy. Unlike solar panels, the MMRTG does not rely on the presence of the Sun for power, giving engineers significant flexibility in operating the rover's instruments even at night, during dust storms, and through the winter season.
The Norwegian-developed radar RIMFAX is one of the seven instruments placed on board. The radar was developed together with FFI (Norwegian Defence Research Establishment), led by Principal Investigator Svein-Erik Hamran of FFI, the Norwegian Space Center, and a number of Norwegian companies. For the first time, room has also been found for an uncrewed helicopter, controlled by the NTNU (Norwegian University of Science and Technology)-trained cybernetics engineer Håvard Fjær Grip and his team at NASA's Jet Propulsion Laboratory in Pasadena, California. Each Mars mission contributes to an ongoing innovation chain: each draws on prior operations or tested technologies and contributes uniquely to upcoming missions. By using this strategy, NASA is able to advance the frontiers of what is currently feasible while still depending on earlier advancements. The Curiosity rover, which touched down on Mars in 2012, is directly responsible for a large portion of Perseverance's design, including its entry, descent, and landing mechanism. With Perseverance, new technological innovations will be demonstrated, and entry, descent, and landing capabilities will be improved.
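The decay sketch referenced above: the MMRTG's output declines only slowly because its Pu-238 fuel decays exponentially. This is a minimal sketch assuming the electrical output simply tracks that decay; the ~110 W starting point is from the text, while the 87.7-year half-life and the neglect of thermocouple degradation (real units decline somewhat faster) are assumptions:

# Rough sketch of MMRTG electrical output over time, assuming output
# scales with the radioactive decay of the Pu-238 fuel and ignoring
# thermocouple degradation.
HALF_LIFE_YEARS = 87.7  # Pu-238 half-life (assumption, not in the text)
P0_WATTS = 110.0        # approximate output at launch (from the text)

def power_after(years):
    return P0_WATTS * 0.5 ** (years / HALF_LIFE_YEARS)

for y in (0, 5, 14):    # 14 years = the stated operational lifetime
    print(f"year {y:2d}: ~{power_after(y):.0f} W")
# year  0: ~110 W
# year  5: ~106 W
# year 14: ~98 W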
These EDL and design advancements will help open the door for future robotic and human missions to the Moon and Mars. Ingenuity was the robotic coaxial helicopter that made the first aircraft flights on another planet. It was deployed from the underside of Perseverance and used autonomous control guided by flight plan instructions uploaded from mission control. After each landing, it transmitted photographs and other data to Perseverance, which relayed the information to Earth. The program was originally designed to perform only five hops, but the helicopter flew 72 times over three years until NASA ended its mission on January 25, 2024. NASA has plans to build on the helicopter's design for future Mars missions.
Mission
The mission is centered on exploring Jezero crater, which scientists believe held a lake about 250 m (820 ft) deep from roughly 3.9 billion to 3.5 billion years ago. Jezero today features a prominent river delta, where flowing water deposited much sediment over the eons, a setting "extremely good at preserving biosignatures". The sediments in the delta likely include carbonates and hydrated silica, known to preserve microscopic fossils on Earth for billions of years. Prior to the selection of Jezero, eight proposed landing sites for the mission were under consideration by September 2015: Columbia Hills in Gusev crater, Eberswalde crater, Holden crater, Jezero crater, Mawrth Vallis, Northeastern Syrtis Major Planum, Nili Fossae, and Southwestern Melas Chasma. A workshop was held on February 8–10, 2017, in Pasadena, California, to discuss these sites, with the goal of narrowing the list to three for further consideration. The three sites chosen were Jezero crater, Northeastern Syrtis Major Planum, and Columbia Hills. Jezero crater was ultimately selected as the landing site in November 2018. The "fetch rover" for returning the samples is expected to launch in 2026. The landing and surface operations of the "fetch rover" would take place early in 2029. The earliest return to Earth is envisaged for 2031.
Launch and cruise
The launch window, when the positions of Earth and Mars were optimal for traveling to Mars, opened on July 17, 2020, and lasted through August 15, 2020. The rocket was launched on July 30, 2020, at 11:50 UTC, and the rover landed on Mars on February 18, 2021, at 20:55 UTC, with a planned surface mission of at least one Mars year (668 sols or 687 Earth days). Two other missions to Mars were launched in this window: the United Arab Emirates Space Agency launched its Emirates Mars Mission with the Hope orbiter on July 19, 2020, which arrived in Mars orbit on February 8, 2021, and the China National Space Administration launched Tianwen-1 on July 23, 2020, arriving in orbit on February 10, 2021, and successfully soft-landing the Zhurong rover on May 14, 2021. NASA announced that all of the trajectory correction maneuvers (TCMs) were a success. The spacecraft fired thrusters to adjust its course toward Mars, shifting the probe's initial post-launch aim point onto the Red Planet.
Entry, descent, and landing (EDL)
Prior to landing, the science team from an earlier NASA lander, InSight, announced that they would attempt to detect the entry, descent and landing (EDL) sequence of the Mars 2020 mission using InSight's seismometers.
Although InSight was more than 3,400 km (2,100 mi) away from the Mars 2020 landing site, the team indicated that there was a possibility that InSight's instruments would be sensitive enough to detect the hypersonic impact of Mars 2020's cruise mass balance devices on the Martian surface. The rover's landing was planned to be similar to that of the Mars Science Laboratory mission used to deploy Curiosity on Mars in 2012. The craft from Earth was a carbon fiber capsule that protected the rover and other equipment from heat during entry into the Mars atmosphere and provided initial guidance toward the planned landing site. Once through the worst of the entry heating, the craft deployed a parachute from the backshell to slow the descent to a controlled speed and jettisoned the lower heat shield. With the craft moving under 320 km/h (200 mph) and about 1.9 km (1.2 mi) from the surface, the rover and sky crane assembly detached from the backshell, and rockets on the sky crane controlled the remaining descent to the planet. As the sky crane moved closer to the surface, it lowered Perseverance via cables until it confirmed touchdown, detached the cables, and flew a distance away to avoid damaging the rover. Perseverance successfully landed on the surface of Mars with the help of the sky crane on February 18, 2021, at 20:55 UTC, to begin its science phase, and began sending images back to Earth. Ingenuity reported back to NASA via the communications systems on Perseverance the following day, confirming its status. The helicopter was not expected to be deployed for at least 60 days into the mission. NASA also confirmed that the on-board microphone on Perseverance had survived entry, descent and landing (EDL), along with other high-end visual recording devices, and released the first audio recorded on the surface of Mars shortly after landing, capturing the sound of a Martian breeze as well as a hum from the rover itself. On May 7, 2021, NASA confirmed that Perseverance managed to record both audio and video of Ingenuity's fourth flight, which took place on April 30, 2021.
In support of the NASA-ESA Mars Sample Return campaign, rock, regolith (Martian soil), and atmosphere samples are being cached by Perseverance. As of July 2025, 33 out of 43 sample tubes have been filled, including 8 igneous rock sample tubes, 13 sedimentary rock sample tubes, 3 igneous/impactite rock sample tubes, a serpentinite rock sample tube, a silica-cemented carbonate rock sample tube, two regolith sample tubes, an atmosphere sample tube, and three witness tubes. Before launch, 5 of the 43 tubes were designated "witness tubes" and filled with materials that would capture particulates in the ambient environment of Mars. Of the 43 tubes, 3 witness sample tubes will not be returned to Earth and will remain on the rover, as the sample canister will only have 30 tube slots. Further, 10 of the 43 tubes have been left as backups at the Three Forks Sample Depot.
Cost
NASA plans to expend roughly US$2.8 billion on the Mars 2020 mission over 10 years: almost $2.2 billion on the development of the Perseverance rover, $80 million on the Ingenuity helicopter, $243 million for launch services, and $296 million for 2.5 years of mission operations (the line items roughly sum to the headline figure; see the sketch below). Adjusted for inflation, Mars 2020 is the sixth-most expensive robotic planetary mission made by NASA and is cheaper than its predecessor, the Curiosity rover.
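A quick sanity check that the quoted budget items add up (figures from the text, in millions of US dollars):

# Mars 2020 budget line items quoted above, in millions of USD.
costs = {
    "Perseverance rover development": 2200,
    "Ingenuity helicopter": 80,
    "launch services": 243,
    "2.5 years of mission operations": 296,
}
total = sum(costs.values())
print(f"total: ${total} million (~${total / 1000:.1f} billion)")
# total: $2819 million (~$2.8 billion)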
As well as using spare hardware, Perseverance also reused designs from Curiosity's mission without needing to re-engineer them, which helped save "probably tens of millions, if not 100 million dollars", according to Mars 2020 Deputy Chief Engineer Keith Comeaux.
Public outreach
To raise public awareness of the Mars 2020 mission, NASA undertook a "Send Your Name To Mars" campaign, through which people could send their names to Mars on a microchip stored aboard Perseverance. After registering their names, participants received a digital ticket with details of the mission's launch and destination. There were 10,932,295 names submitted during the registration period. In addition, NASA announced in June 2019 that a student naming contest for the rover would be held in the fall of 2019, with voting on nine finalist names held in January 2020. Perseverance was announced as the winning name on March 5, 2020. In May 2020, NASA attached a small aluminum plate to Perseverance to commemorate the impact of the COVID-19 pandemic and pay "tribute to the perseverance of healthcare workers around the world". The COVID-19 Perseverance Plate features planet Earth above the Rod of Asclepius, with a line showing the trajectory of the Mars 2020 spacecraft departing Earth. On February 22, 2021, NASA released uninterrupted footage of the landing process of Mars 2020, from parachute deployment to touchdown, in a livestream broadcast. Upon release of this footage, engineers revealed that the rover's parachute contained a puzzle, which Internet users solved within six hours: the parachute's pattern was based on binary code and translated to the motto of JPL (Dare Mighty Things) and the coordinates of its headquarters (a toy decoder is sketched below). Irregular patterns are frequently used on spacecraft parachutes to better determine the performance of specific parts of the parachute. A small piece of the wing covering from the Wright brothers' 1903 Wright Flyer is attached to a cable underneath Ingenuity's solar panel. NASA engineer Swati Mohan delivered the news of the successful landing.
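By most public accounts of the puzzle, each letter was encoded on the parachute's rings as a binary value from 1 to 26 (A = 1, ..., Z = 26); the exact gore-by-gore layout on the chute is more involved than shown here. A toy decoder under that assumption, with illustrative bit groups spelling the first word of the motto:

# Toy decoder for the reported "Dare Mighty Things" parachute scheme:
# each binary group encodes a number 1..26, mapped to a letter (1 = A).
def decode(values):
    return "".join(chr(ord("A") + v - 1) for v in values)

bit_groups = ["0000100", "0000001", "0010010", "0000101"]  # illustrative
print(decode([int(b, 2) for b in bit_groups]))  # -> "DARE"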
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-ft-openai-broadcom-133] | [TOKENS: 8773] |
OpenAI
OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following a restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement from authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left the company, citing its prominent role in an industry-wide problem.
Founding
In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers; he was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.
Corporate structure
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment (so, for example, a $10 million investment could return at most $1 billion). According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers worked for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflicts of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous", though Musk later revived legal action against Altman and others in August. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring a willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the bid complicated Altman's restructuring plan by suggesting a floor for how highly the nonprofit's stake should be valued.
OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use the proceeds to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. An open letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would strip governance safeguards from the nonprofit and from the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman said was the most likely path forward.
In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which would be provided as access to Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion (consistent with the roughly $500 billion valuation from the October share sale: 0.27 × $500 billion = $135 billion). In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, an achievement that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies.
In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion; the partners planned to fund the project over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently established a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history to that point. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024 and was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores the enormous capital requirements of scaling cutting-edge AI technology. In October 2025, OpenAI completed an employee share sale to existing investors, authorized at up to $10 billion, which valued the company at $500 billion, surpassing SpaceX as the world's most valuable privately owned company.
On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that Microsoft had been given a non-voting observer seat on the board to monitor the company's operations; Microsoft gave up the seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.
In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. Also in October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. The pieces of text involved usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama redistributed the equivalent of between $1.32 and $2.00 per hour post-tax, roughly 11 to 16 percent, to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management.
In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. Also in September 2025, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts' worth of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects.
OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.
Services
In February 2019, GPT-2 was announced, gaining attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product (a minimal sketch of a modern API call appears below). Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro, a $200-per-month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning.
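For a sense of what the commercial API looks like to a developer today, here is a minimal sketch using OpenAI's current Python SDK; the model name and prompt are illustrative, and an OPENAI_API_KEY environment variable is assumed:

# Minimal chat-completion call with the openai Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain the transistor in one sentence."}],
)
print(response.choices[0].message.content)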
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, saying the model would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to help scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing.
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity.
In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company.
In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that the exposure was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed content deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined its board. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including through reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. The investigation concerned allegations that the company had scraped public data and published false and defamatory information; the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while the two companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal law. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters the federal government is better placed to regulate. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription raised concerns among OpenAI employees about potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI alleging copyright infringement. The suits were said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs, who also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications, such as The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the AI company and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities", based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI claimed that it could make available neither the recipients of ChatGPT's output nor the sources used. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare"; its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot; the complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, Stein-Erik Soelberg, then 56 years old, allegedly murdered his mother, Suzanne Adams; in the months prior, Soelberg, who was experiencing paranoid delusions, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Radius] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism are invariant under a Lorentz transformation: they are identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the gravitational redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein had refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the theory to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius; he missed the possibility that implosion would drive the system below this critical value.: 135
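Michell's 1784 estimate above can be checked with a few lines of Newtonian mechanics: at fixed density, the enclosed mass grows as the cube of the radius, so the surface escape velocity √(2GM/R) grows linearly with radius, and a body of solar density roughly 500 times the Sun's radius reaches the speed of light. A minimal sketch in Python, using standard rounded constants (the function name and structure are ours, not from any historical source):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def escape_velocity(radius_factor: float) -> float:
    """Surface escape velocity (m/s) of a body with the Sun's mean density
    but `radius_factor` times the Sun's radius.
    At constant density, M scales as radius^3, so v_esc scales linearly."""
    mass = M_SUN * radius_factor**3
    radius = R_SUN * radius_factor
    return math.sqrt(2 * G * mass / radius)

# The Sun itself: ~6.2e5 m/s, far below c.
print(f"1 R_sun  : v_esc = {escape_velocity(1):.3e} m/s")
# Michell's star, 500 solar radii at solar density: ~3.1e8 m/s, exceeding c.
v = escape_velocity(500)
print(f"500 R_sun: v_esc = {v:.3e} m/s, exceeds c: {v > C}")
```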
By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure is larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he had shown that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei (neutron stars), but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned, either their model would not apply or gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, and then described the results as seen from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside a black hole cannot affect events that occur outside it. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962, the two points of view had been reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found.
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel showed that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole is defined by its mass alone. Similar uniqueness results were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin, respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption of perfect spherical symmetry, and therefore would not appear in generic situations where black holes are not necessarily symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they later reversed their position. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the centers of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but ubiquitous in the centers of galaxies: almost every galaxy has a supermassive black hole at its center, many of them quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of stars in a galaxy's central bulge to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source at the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole at Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole at the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole; Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes could not be honored, as he had died in 2018 and the prize is not awarded posthumously. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time, at an infinite distance from the black hole, to confirm that nothing has escaped, so the definition cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is commonly taken to be a compact object with a mass larger than about four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture holds for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because such singularities would make the universe inherently unpredictable, many physicists believe they cannot exist. The weak cosmic censorship hypothesis, proposed by Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this hypothesis has not been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge as a black hole nears extremality. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly: the stellar black hole GRS 1915+105 has been estimated to spin at over 1,000 revolutions per second, and the Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate.
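The extremality bound quoted above is straightforward to evaluate numerically. The following is a minimal Python sketch, in SI units, of the dimensionless spin a* = cJ/(GM²) and of the combined charge-spin inequality; the 10 M☉ example, the function names, and the rounded constants are illustrative assumptions, not values from the article:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
EPS0 = 8.854e-12    # vacuum permittivity, F/m
M_SUN = 1.989e30    # solar mass, kg

def dimensionless_spin(mass_kg: float, j_si: float) -> float:
    """Spin parameter a* = cJ / (G M^2): 0 for Schwarzschild, 1 for extremal Kerr."""
    return C * j_si / (G * mass_kg**2)

def is_sub_extremal(mass_kg: float, j_si: float = 0.0, q_coulomb: float = 0.0) -> bool:
    """Check Q^2/(4 pi eps0) + c^2 J^2/(G M^2) <= G M^2, the condition
    for an event horizon to exist (all terms in SI units)."""
    lhs = q_coulomb**2 / (4 * math.pi * EPS0) + (C**2 * j_si**2) / (G * mass_kg**2)
    return lhs <= G * mass_kg**2

# Maximum angular momentum of an uncharged 10-solar-mass black hole: J_max = G M^2 / c.
m = 10 * M_SUN
j_max = G * m**2 / C
print(f"J_max = {j_max:.3e} kg m^2/s, a* = {dimensionless_spin(m, j_max):.2f}")
print("sub-extremal at 0.9 J_max:", is_sub_extremal(m, 0.9 * j_max))
```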
The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects. Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole; the method requires an independent measurement of the black hole mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spins of both progenitor black holes and of the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with the opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse is stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity, and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity, and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will become unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets; however, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism by which jets form is not yet known, but several options have been proposed. One proposal is the Blandford–Znajek process, in which the dragging of magnetic field lines by a black hole's rotation launches jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward by internal processes, its matter falls farther inward, converting gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and can differ across a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be classified as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape. Quasar accretion disks are expected to usually appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part travelling away appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit farther from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where r_ISCO is the radius of the ISCO, r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly with particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO moves inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde). For example, the ISCO for a particle orbiting retrograde can be as far out as about 4.5 r_s (9 GM/c²), while the ISCO for a particle orbiting prograde can be as close as the event horizon itself.
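For the non-spinning case, the ISCO formula above is easy to evaluate, and it connects to the accretion efficiency discussed later in the article: a particle at the Schwarzschild ISCO retains √(8/9) of its rest energy, so accretion down to the ISCO can radiate away about 5.7% of the rest mass. A short illustrative Python sketch (the 10 M☉ example and rounded constants are our assumptions):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2GM/c^2, the event horizon radius of a non-spinning black hole."""
    return 2 * G * mass_kg / C**2

def isco_radius_schwarzschild(mass_kg: float) -> float:
    """r_ISCO = 3 r_s = 6GM/c^2, for a non-spinning hole and a spinless particle."""
    return 3 * schwarzschild_radius(mass_kg)

m = 10 * M_SUN
print(f"r_s    = {schwarzschild_radius(m) / 1e3:.1f} km")        # ~29.5 km
print(f"r_ISCO = {isco_radius_schwarzschild(m) / 1e3:.1f} km")   # ~88.6 km

# Specific orbital energy at the Schwarzschild ISCO is sqrt(8/9) of rest energy,
# so spiraling in from far away can release about 5.7% of the rest mass.
efficiency = 1 - math.sqrt(8 / 9)
print(f"Schwarzschild accretion efficiency ~ {efficiency:.1%}")
```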
The photon sphere is a spherical boundary on which photons moving tangentially to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; for non-Schwarzschild black holes, the radius is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon orbits prograde or retrograde: a prograde photon sphere lies between 1 and 3 GM/c² from the center of the black hole, while a retrograde photon sphere lies between 3 and 4 GM/c², with the exact location depending on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there is only one photon sphere, whose radius decreases with increasing black hole charge. For non-extremal, charged, rotating black holes, there are always two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this region it is no longer possible for free-falling matter to follow circular orbits or to halt its final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission; light and radiation emitted from this region can still escape the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km} ,} where rs is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller; an extremal black hole has an event horizon radius as small as {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward, towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one farther from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole does not notice any of these effects as they cross the event horizon. Their own clock appears to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222
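The Schwarzschild radius formula above also yields the density scaling just quoted: since r_s grows linearly with M, the mean density M / ((4/3)πr_s³) falls as 1/M². A brief Python sketch checking the ~2.95 km per solar mass coefficient and the water-like density of a 10⁸ M☉ black hole (treating the horizon's interior as a Euclidean sphere, as the text implicitly does):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event horizon radius of a non-spinning black hole: r_s = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

def mean_density(mass_kg: float) -> float:
    """Mass divided by the Euclidean volume inside r_s; scales as 1/M^2."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / ((4 / 3) * math.pi * r**3)

print(f"1 M_sun  : r_s = {schwarzschild_radius(M_SUN) / 1e3:.2f} km")  # ~2.95 km
# ~1800 kg/m^3: the same order of magnitude as water (1000 kg/m^3).
print(f"1e8 M_sun: mean density = {mean_density(1e8 * M_SUN):.0f} kg/m^3")
```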
Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole build up at the horizon, causing the curvature of spacetime there to grow toward infinity, so that an observer falling in would experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would prevent mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation backscatters off the black hole's spacetime curvature and travels outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity; this buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature is infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole, it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall farther into the black hole, they will be torn apart by the growing tidal forces, in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity; any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by annihilation of dark matter), or hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it runs out of hydrogen to fuse and starts fusing progressively more massive elements, up to iron. Since the fusion of elements heavier than iron would require more energy than it releases, nuclear fusion then ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time in the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material approaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time for small seeds to reach quasar masses. One suggestion is direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which then collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is typically unstable to fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare.
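The growth-rate limit mentioned above can be made concrete with the standard e-folding (Salpeter) timescale for Eddington-limited accretion, t ≈ (σ_T c / 4πG m_p) · ε/(1−ε). The Python sketch below assumes a conventional 10% radiative efficiency and a 10 M☉ stellar seed; both numbers are illustrative assumptions rather than values from this article:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_P = 1.673e-27      # proton mass, kg
SIGMA_T = 6.652e-29  # Thomson cross-section, m^2
YEAR = 3.156e7       # seconds per year

def salpeter_time(efficiency: float = 0.1) -> float:
    """e-folding time for Eddington-limited growth:
    t = (sigma_T c / (4 pi G m_p)) * eps / (1 - eps); roughly 50 Myr for eps = 0.1."""
    t_edd = SIGMA_T * C / (4 * math.pi * G * M_P)
    return t_edd * efficiency / (1 - efficiency)

# Illustrative assumption: grow a 10 M_sun seed into a 1e9 M_sun quasar engine.
seed, target = 10.0, 1e9            # solar masses
n_efolds = math.log(target / seed)  # ~18.4 e-folds
t_total = n_efolds * salpeter_time()
# Comes out near 0.9 Gyr of uninterrupted Eddington-rate accretion: barely
# compatible with quasars observed less than a billion years after the Big Bang.
print(f"{n_efolds:.1f} e-folds -> {t_total / YEAR / 1e9:.2f} Gyr of sustained accretion")
```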
In the current epoch of the universe, the conditions needed to form black holes are rare and mostly found in stars. In the early universe, however, conditions may have allowed black holes to form by other means. Fluctuations of spacetime soon after the Big Bang may have created regions denser than their surroundings. Initially, these regions would not have been compact enough to form black holes, but eventually the curvature of spacetime within them could become large enough to cause them to collapse. Different models of the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite being extremely dense, the early universe did not re-collapse into a black hole during the Big Bang, since it was expanding rapidly and lacked the gravitational differential necessary for black hole formation; models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as a binary of supermassive black holes approach each other, most nearby stars are ejected, leaving little for the remaining pair to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk; accretion beyond the limit, called super-Eddington accretion, may have been commonplace in the early universe.
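The Eddington limit just described has a standard textbook estimate for ionized hydrogen, L_Edd = 4πGM m_p c / σ_T, obtained by balancing outward radiation pressure on electrons against gravity on protons. A minimal Python sketch (the example masses are arbitrary choices of ours):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_P = 1.673e-27        # proton mass, kg
SIGMA_T = 6.652e-29    # Thomson scattering cross-section, m^2
M_SUN = 1.989e30       # solar mass, kg
L_SUN = 3.828e26       # solar luminosity, W

def eddington_luminosity(mass_kg: float) -> float:
    """L_Edd = 4 pi G M m_p c / sigma_T: the luminosity at which radiation
    pressure on electrons balances gravity on protons (ionized hydrogen)."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

for solar_masses in (10, 1e8):  # a stellar and a supermassive example
    l_edd = eddington_luminosity(solar_masses * M_SUN)
    print(f"{solar_masses:g} M_sun: L_Edd = {l_edd:.2e} W (~{l_edd / L_SUN:.1e} L_sun)")
```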
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centres to be hotter than expected. If Hawking's theory of black hole radiation is correct, black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and place stringent limits on the existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction 10^−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
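The figures quoted above, 62 nanokelvins for a solar-mass black hole and a sub-lunar mass threshold for evaporation, can be checked against the Hawking temperature formula T = ħc³/(8πGMk_B). A quick verification in Python (the constants are standard values, not taken from the article):

```python
import math

hbar  = 1.0546e-34   # reduced Planck constant, J s
c     = 2.998e8      # speed of light, m/s
G     = 6.674e-11    # gravitational constant, SI units
k_B   = 1.3807e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30     # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature in kelvins; inversely proportional to mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def mass_for_temperature(t_kelvin: float) -> float:
    """The black hole mass whose Hawking temperature equals t_kelvin."""
    return hbar * c**3 / (8 * math.pi * G * k_B * t_kelvin)

print(f"T(1 M_sun)   = {hawking_temperature(M_SUN):.2e} K")   # ~6.2e-8 K
print(f"M(T = 2.7 K) = {mass_for_temperature(2.7):.2e} kg")   # ~4.5e22 kg
# For comparison, the Moon's mass is about 7.3e22 kg.
```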
These laws are not equivalent to the laws of thermodynamics, however, because according to general relativity without quantum mechanics a black hole can never emit radiation, so its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy which scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. The true quantum nature of black hole thermodynamics thus continues to be debated. Observational evidence Millions of black holes of around 30 solar masses, formed from stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope depends on its aperture and the wavelengths it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The beams reflect off mirrors at the ends of the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise in order to detect them. Since the first measurements in 2016, multiple gravitational-wave signals from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
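A quick diffraction-limit estimate shows why the EHT described above needs an Earth-sized effective aperture. The sketch below assumes the commonly cited 1.3 mm observing wavelength and a shadow size of roughly 50 microarcseconds for Sagittarius A*; these are typical published values, assumed here for illustration rather than taken from this article:

```python
ARCSEC_PER_RAD = 206265.0  # arcseconds per radian

def required_aperture_m(wavelength_m: float, resolution_uas: float) -> float:
    """Aperture D giving a diffraction-limited resolution of
    theta = 1.22 * lambda / D (theta given in microarcseconds)."""
    theta_rad = (resolution_uas * 1e-6) / ARCSEC_PER_RAD
    return 1.22 * wavelength_m / theta_rad

# Resolving a ~50 microarcsecond shadow at a 1.3 mm wavelength:
d = required_aperture_m(1.3e-3, 50.0)
print(f"Required aperture: ~{d / 1e3:.0f} km (Earth's diameter: ~12,742 km)")
```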
In 1998, by fitting the motions of these stars to Keplerian orbits, astronomers were able to infer that an object of 2.6×10^6 M☉ must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers refined the mass of Sagittarius A* to 4.3×10^6 M☉, contained within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not by itself prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass in such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff (TOV) limit sets the maximum mass of a nonrotating neutron star, estimated at about two solar masses. While a rotating neutron star can be slightly more massive, a compact object much more massive than the TOV limit cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was identified in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
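The central-mass estimates above follow from Kepler's third law, M = 4π²a³/(GT²). A sanity check with round numbers close to the star S2's published orbit (a semi-major axis of about 1,000 AU and a period of about 16 years, assumed here for illustration, not taken from this article):

```python
import math

G     = 6.674e-11  # gravitational constant, SI units
AU    = 1.496e11   # astronomical unit, m
YEAR  = 3.156e7    # year, s
M_SUN = 1.989e30   # solar mass, kg

def kepler_central_mass(a_au: float, period_years: float) -> float:
    """Central mass in solar masses from M = 4*pi^2*a^3 / (G*T^2)."""
    a, t = a_au * AU, period_years * YEAR
    return 4 * math.pi**2 * a**3 / (G * t**2) / M_SUN

# Round numbers close to S2's orbit around Sagittarius A*:
print(f"~{kepler_central_mass(1000, 16):.1e} solar masses")  # ~4e6
```

The result lands within a factor of order unity of the quoted 4.3×10^6 M☉, as expected for such rough input values.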
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centres of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. An AGN consists of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galactic nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022 astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass: 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars would go further, converting quarks in their cores into leptons and providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are themselves made of still smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star (another hypothetical type of compact star) is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a "nestar". Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. Smaller black holes may have undergone mergers to produce the observed supermassive black holes. They may also have been seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these early supermassive black holes may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms might allow black holes to grow faster than the theoretical Eddington limit, for example if dense gas in the accretion disk traps the outward radiation pressure that would otherwise throttle accretion; however, the formation of bipolar jets may prevent sustained super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet near a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-107] | [TOKENS: 10728] |
PlayStation (console) The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, in Europe on 29 September 1995, and in other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges, in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but then decided to turn what it had developed with Nintendo and Sega into a console of its own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would retain a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 with Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, the project faced opposition from a majority of those present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation suffered at Nintendo's hands, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project while maintaining the relationship with Philips on the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or on 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama said that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995, respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed by early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the idea of developing on these expensive workstations and asked the Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for systems such as the Mega Drive, the Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented a prototype of their condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software, such as an assembler, a linker, and a debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was acquired by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers when needed. Sony did not favour its own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing was available at dozens of locations around the world. The PlayStation's architecture and its interconnectability with PCs were beneficial to many software developers. The use of the C programming language also proved useful, as it safeguarded the future compatibility of software should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM challenging given the 3.5 megabyte restriction.
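The 3.5 megabyte restriction mentioned above is the sum of the console's separate memory pools. A back-of-the-envelope budget in Python; note that the 512 KB of dedicated sound RAM is a commonly cited specification assumed here, since this article only gives the 2 MB main RAM and 1 MB video RAM figures:

```python
# PlayStation memory pools, in kilobytes (1 KB = 1024 bytes).
MAIN_RAM_KB  = 2 * 1024  # 2 MB main RAM for code and game data
VIDEO_RAM_KB = 1 * 1024  # 1 MB video RAM for frame buffers and textures
SOUND_RAM_KB = 512       # 512 KB sound RAM (commonly cited figure)

total_kb = MAIN_RAM_KB + VIDEO_RAM_KB + SOUND_RAM_KB
print(f"Total: {total_kb} KB = {total_kb / 1024} MB")  # 3.5 MB

# For scale: one 320x240 frame at 16 bits (2 bytes) per pixel.
frame_kb = 320 * 240 * 2 / 1024
print(f"One 320x240 16-bit frame: {frame_kb:.0f} KB")  # 150 KB
```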
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and the final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on its first day and two million within six months, although the Saturn outsold it in the first few weeks owing to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 for such consoles. Before the North American release, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer, and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales; some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season, compared with Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units; Sony then launched the console countrywide, in its PS one form, on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark prevented an official launch, and the officially distributed Sega Saturn initially dominated the market; as the Saturn withdrew, however, PlayStation imports and widespread piracy grew. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people entering adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for letters, rendered as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols on the four buttons of the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say, "Bullshit.
Let me show you how ready I am." As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money for impromptu marketing. In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64; Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, prompted by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales came in lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in July 2000, Sony released the PS one, a smaller, redesigned variant, which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to have shipped 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of the two PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on "cop2", the 3D and matrix-math coprocessor on the same die, to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers sampling rates of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video, or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or to link multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate 4,000 sprites and 180,000 texture-mapped polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them to an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications, including a C compiler.
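The single megabyte of video RAM helps explain why most games favoured the lower display modes listed above. A sketch of the framebuffer arithmetic; the GPU's organisation of VRAM as a 1024×512 grid of 16-bit words is a widely documented detail assumed here, not a figure from the text:

```python
# 1 MB of VRAM, organised as a 1024x512 grid of 16-bit (2-byte) words.
VRAM_BYTES = 1024 * 512 * 2

def framebuffer_bytes(width: int, height: int, buffers: int = 2) -> int:
    """Bytes consumed by the given number of 16-bit frame buffers."""
    return width * height * 2 * buffers

for w, h in [(256, 224), (320, 240), (640, 480)]:
    used = framebuffer_bytes(w, h)  # double-buffered display
    free = VRAM_BYTES - used        # negative means it does not fit
    print(f"{w}x{h}: {used // 1024:5d} KB for two buffers, "
          f"{free // 1024:5d} KB left for textures")
```

At 640×480, two 16-bit buffers alone would exceed the available VRAM, which is consistent with high-resolution modes being comparatively rare and typically single-buffered.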
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation; it was the highest-selling console through the end of that year. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also carries a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, giving users finer control of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided to remove haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Rumble Pak for the Nintendo 64 controller. A Nintendo spokesman, however, denied that Nintendo had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two ("dual") vibration motors ("shock"). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play audio CDs; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console with no game disc inserted or with the CD tray open, bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PlayStation and PS One differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and of patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and so produced duplicates that omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process.

Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations used a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.

Reception

The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, as profits from its video game division contributed 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its gaining mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary cartridge-based Nintendo 64,[d] which the industry had expected would use CDs, as the PlayStation did. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because the proprietary cartridge format helped enforce copy protection, given Nintendo's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64 (Konami, for example, released only thirteen N64 games but over fifty for the PlayStation). Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run off the open source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Switch_statement] | [TOKENS: 1526] |
Switch statement

In computer programming, a switch statement is a selection control flow mechanism that changes execution control based on the value of an expression (i.e. the evaluation of a variable). A switch statement is similar to an if statement, but instead of branching only on true or false, it branches on any number of values. Although the syntax varies by programming language, most imperative languages provide a statement with the semantics described here as the switch statement. The construct is most often denoted with the keyword switch, though some languages use variations such as case, select, or inspect.

Value

Use of a switch statement is sometimes considered superior to an equivalent series of if-then-else statements because it can be easier to read, debug, and understand, and because a compiler can often translate it into more efficient code, such as a branch table.

Elements

Typically, a switch statement involves a control expression, a series of case clauses (each pairing one or more values with a block of code), and usually a default clause whose code executes when no case value matches.

Fall through

Two main variations of the switch statement exist: unstructured, which supports fall through, and structured, which does not. For a structured switch, as in Pascal-like languages, control jumps from the start of the switch statement to the selected case, and at the end of the case, control jumps to the end of the switch statement. This behaves like an if-then-else conditional but supports branching on more than just true and false values. To allow multiple values to execute the same code (avoiding duplicate code), the syntax permits multiple values per case. An unstructured switch, as in C (and more generally in languages influenced by Fortran's computed goto), acts like a goto: control branches from the start of the switch to a case section, and then continues until either a block exit statement or the end of the switch statement. When control branches to one case but continues into the subsequent branch, the control flow is called fall through; it allows branching to the same code for multiple values. Fall through is prevented by ending a case with a keyword (e.g. break), but a common mistake is to accidentally omit the keyword, causing unintentional fall through and often a bug. Therefore, many consider this language feature dangerous, and fall-through code often draws a warning from a code quality tool such as lint. Some languages, such as JavaScript, retain fall-through semantics, while others exclude or restrict it. Notably, in C# all blocks must be terminated with break or return unless the block is empty, which limits fall through to branching from multiple values. In some cases, languages provide optional fall through. For example, Perl does not fall through by default, but a case may explicitly do so using a continue keyword, preventing unintentional fall through. Similarly, Bash defaults to not falling through when a case is terminated with ;;, but allows fall through with ;& or ;;& instead. An example of a switch statement that relies on fall through is Duff's device.

Case expression evaluation

Some languages allow a complex case expression (not just a static value), allowing more dynamic branching behavior. This prohibits certain compiler optimizations, so it is more common in dynamic languages where flexibility is prioritized over performance. For example, in PHP and Ruby, a constant can be used as the control expression, and the first case statement that evaluates to match that constant is executed. In the PHP pattern sketched below, the switch expression is simply the value true, so the first case expression that evaluates to true is the one selected. This feature is also useful for checking multiple variables against one value rather than one variable against many values.
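A minimal PHP sketch of this switch (true) idiom; the variable name and grade boundaries here are illustrative assumptions, not taken from the original article:

    <?php
    $score = 77;
    switch (true) {                 // control expression is simply `true`
        case $score >= 90:          // first case expression evaluating to true is selected
            $grade = "A";
            break;
        case $score >= 70:
            $grade = "B";
            break;
        default:
            $grade = "F";
    }
    echo $grade;                    // prints "B"

Because the case expressions are evaluated in order, the cases must be arranged from most to least specific, exactly as a chain of if-then-else tests would be.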
COBOL also supports this form via its EVALUATE statement. PL/I supports similar behavior by omitting the control expression; the first WHEN expression that evaluates as true is executed. In Ruby, due to its handling of === equality, the case expression can be used to test a variable's class.

Result value

Some languages support evaluating a switch statement to a value. The case expression is supported by languages dating at least as far back as ALGOL-W, in which an integer expression was evaluated to select the desired expression from a list of expressions. Other languages supporting the case expression include SQL, Standard ML, Haskell, Common Lisp, and Oxygene. The switch expression (introduced in Java SE 12) evaluates to a value. It also brings a new form of case label, case L ->, where the right-hand side is a single expression; this form prevents fall through and requires that the cases be exhaustive. In Java SE 13 the yield statement was introduced, and in Java SE 14 switch expressions became a standard language feature (a sketch appears at the end of this article). Ruby also supports these semantics, as its case expression evaluates to a value.

Exception handling

A number of languages implement a form of switch statement in exception handling, where if an exception is raised in a block, a separate branch is chosen depending on the exception. In some cases a default branch, taken if no exception is raised, is also present. An early example is Modula-3, which uses the TRY...EXCEPT syntax, where each EXCEPT defines a case. This is also found in Delphi, Scala, and Visual Basic .NET.

Examples

The following code is a switch statement in C. If age is 1, it outputs "You're one."; if age is 3, it outputs "You're three. You're three or four." by falling through from one case to the next.
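A minimal C sketch matching that description; the case 4 branch follows from the fall-through behaviour described, while the default message is an assumption beyond what the text specifies:

    #include <stdio.h>

    int main(void) {
        int age = 3;
        switch (age) {
        case 1:
            printf("You're one.");
            break;                      /* break prevents fall through */
        case 3:
            printf("You're three. ");
            /* no break: execution falls through into the next case */
        case 4:
            printf("You're three or four.");
            break;
        default:
            printf("You're another age.");  /* hypothetical default, not specified in the text */
        }
        return 0;
    }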
Python (starting with 3.10.6) supports the match and case keywords, and does not allow fall through. Unlike if statement conditions, the or keyword cannot be used to differentiate between cases; alternative patterns are instead combined with |. The pattern case _ is equivalent to default in C (a sketch appears at the end of this article). Structured switch constructs also appear in Pascal's case statement, in the Oxygene dialect of Pascal (where a switch statement can be used as an expression), in shell script's case construct, and as hand-built jump tables in assembly language.

Alternatives

Some alternatives to using a switch statement include a series of if-else conditionals, a lookup table or dispatch table that maps values to functions, and, in object-oriented languages, dynamic dispatch through polymorphism.

History

In his 1952 text Introduction to Metamathematics, Stephen Kleene formally proves that the case function (the if-then-else function being its simplest form) is a primitive recursive function, where he defines the notion "definition by cases" in the following manner:

"#F. The function φ defined thus

\[
\varphi(x_1,\ldots,x_n) =
\begin{cases}
\varphi_1(x_1,\ldots,x_n) & \text{if } Q_1(x_1,\ldots,x_n), \\
\quad\vdots \\
\varphi_m(x_1,\ldots,x_n) & \text{if } Q_m(x_1,\ldots,x_n), \\
\varphi_{m+1}(x_1,\ldots,x_n) & \text{otherwise},
\end{cases}
\]

where Q1, ..., Qm are mutually exclusive predicates (or φ(x1, ..., xn) shall have the value given by the first clause which applies) is primitive recursive in φ1, ..., φm+1, Q1, ..., Qm+1." — Stephen Kleene

Kleene provides a proof of this in terms of the Boolean-like recursive functions "sign-of" sg( ) and "not sign of" ~sg( ) (Kleene 1952:222-223); the first returns 1 if its input is positive and 0 if it is not. Boolos, Burgess, and Jeffrey make the additional observation that "definition by cases" must be both mutually exclusive and collectively exhaustive. They too offer a proof of the primitive recursiveness of this function (Boolos-Burgess-Jeffrey 2002:74-75). The if-then-else is the basis of the McCarthy formalism: its usage replaces both primitive recursion and the mu-operator. The earliest Fortran compilers supported the computed goto statement for multi-way branching. Early ALGOL compilers supported a SWITCH data type which contained a list of "designational expressions". A goto statement could reference a switch variable and, by providing an index, branch to the desired destination. With experience it was realized that a more formal multi-way construct, with a single point of entry and exit, was needed. Languages such as BCPL, ALGOL-W, and ALGOL-68 introduced forms of this construct which have survived into modern languages.
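A minimal Java sketch of the switch expression with arrow labels described under "Result value" above; the day-of-week mapping is an illustrative assumption:

    public class SwitchExpressionDemo {
        static String describe(int day) {
            return switch (day) {           // the switch itself evaluates to a value
                case 6, 7 -> "weekend";     // arrow form: no fall through, several values per label
                case 1, 2, 3, 4, 5 -> "weekday";
                default -> {
                    yield "unknown";        // a block body yields its value explicitly (Java SE 13+)
                }
            };
        }

        public static void main(String[] args) {
            System.out.println(describe(3));    // prints "weekday"
        }
    }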
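Likewise, a small Python 3.10+ sketch of match and case as described above; the shape names are illustrative assumptions. Note | combining alternative patterns where the or keyword is not allowed, and case _ playing the role of C's default:

    def describe(sides: int) -> str:
        match sides:
            case 3:
                return "triangle"
            case 4:
                return "quadrilateral"
            case 5 | 6:                  # alternative patterns are combined with |
                return "pentagon or hexagon"
            case _:                      # equivalent to default in C
                return "other polygon"

    print(describe(4))                   # prints "quadrilateral"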
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_exchange_theory] | [TOKENS: 7609] |
Social exchange theory

Social exchange theory is a sociological and psychological theory that explains how people behave in relationships by using cost-benefit analysis to determine risks and benefits, expecting that what they give will lead to a fair return, and treating social relationships like economic exchanges in which each person controls things the other values and decides whether to continue the relationship based on how beneficial and fair the exchange feels over time. Social exchange theory can be applied to a wide range of relationships, including romantic partnerships, friendships, family dynamics, professional relationships, and other social exchanges. An example can be as simple as exchanging words with a customer at the cash register. In each context, individuals are thought to evaluate the rewards and costs associated with that particular relationship. This can influence decisions about maintaining, deepening, or ending the interaction or relationship. Social exchange theory suggests that people will typically end a relationship if the costs outweigh the rewards, especially if their efforts are not returned. The most comprehensive social exchange theories are those of the American social psychologists John W. Thibaut (1917–1986) and Harold H. Kelley (1921–2003), the American sociologists George C. Homans (1910–1989), Peter M. Blau (1918–2002), and Richard Marc Emerson (1925–1982), and the French anthropologist Claude Lévi-Strauss (1908–2009). Homans defined social exchange as the exchange of activity, tangible or intangible, and more or less rewarding or costly, between at least two persons. After Homans founded the theory, other theorists continued to write about it, particularly Peter M. Blau and Richard M. Emerson, who in addition to Homans are generally thought of as the major developers of the exchange perspective within sociology. Homans' work emphasized the individual behavior of actors in interaction with one another. Although there are various modes of exchange, Homans centered his studies on dyadic exchange. John Thibaut and Harold Kelley are recognized for focusing their studies within the theory on the psychological concepts, the dyad, and the small group. Lévi-Strauss is recognized for contributing to the emergence of this theoretical perspective from his work in anthropology on systems of generalized exchange, such as kinship systems and gift exchange.

Thibaut and Kelley

Thibaut and Kelley based their theory on small groups, particularly dyadic relationships. They used the reward-cost matrices of game theory and identified clues to individuals' interdependence, such as one party's power over the other and the "correspondence" versus "noncorrespondence" of outcomes. Additionally, they suggest that an individual can unilaterally affect her or his own outcomes in a relationship through chosen behaviors. They could predict the possible course of a social interaction through the analysis of aspects of power in an encounter. They also experimented on how the outcomes received in a relationship could define a person's attraction to relationships.

Homans

The foundation of social exchange theory was first laid by George C. Homans in 1958 in his work "Social Behavior as Exchange", where he applied principles of behavioral psychology and sociology to social interactions.
Homans expanded his research in 1961 in Social Behavior: Its Elementary Forms. Homans based his theory on concepts that include equilibration, expectancy, and distributive justice in dyadic exchanges. Using this framework, he explained how people interact in small groups, showing that the rewards they get are usually based on how much effort and how many resources they contribute. Homans summarized his system with three main propositions: the success, stimulus, and deprivation-satiation propositions.

Blau

Blau's theory is very similar to Homans'. However, he uses more economic terms, and his theory is based principally on emergent social structure in social exchange patterns in small groups. His theory analyzes the development of exchange theory in economics without emphasizing the psychological assumptions. He contributed the idea of distinguishing between social and economic exchange, and between exchange and power. The goal of his theory was to identify complex and simple processes without ignoring emergent properties. Blau's utilitarian focus encouraged theorists to look forward, to what actors anticipated the reward would be in their next social interaction. Blau felt that if individuals focused too much on the psychological concepts within the theory, they would refrain from learning the developing aspects of social exchange. Blau emphasized technical economic analysis, whereas Homans concentrated more on the psychology of instrumental behavior.

Emerson

Emerson was inspired by Homans' and Blau's ideas. He focused on the interaction and the relationship between individuals and parties. His view of social exchange theory emphasizes resource availability, power, and dependence as primary dynamics. He thought that relations were organized in different manners, and that they could differ depending on the type and amount of the resources exchanged. He poses the idea that power and dependence are the main aspects that define a relationship. According to Emerson, exchange is not a theory, but a framework from which other theories can converge and be compared to structural functionalism. Emerson's perspective was similar to Blau's, since they both focused on the relationship between power and the exchange process. Emerson says that social exchange theory is an approach in sociology that is described, for simplicity, as an economic analysis of noneconomic social situations. Exchange theory brings a quasi-economic form of analysis into those situations.

Lévi-Strauss

Lévi-Strauss was a social exchange theorist in the context of anthropology, basing his account of kinship systems on Mauss's investigation of the gift. Because such exchange works in the form of indirect reciprocities, Lévi-Strauss suggested the concept of generalized exchange.

Self-interest and interdependence

Self-interest and interdependence are central properties of social exchange. These are the basic forms of interaction when two or more actors have something of value to each other and have to decide whether to exchange, and in what amounts. Homans uses the concept of individualism to explain exchange processes.
To him, the meaning of individual self-interest is a combination of economic and psychological needs. Fulfilling self-interest is common within the economic realm of social exchange theory, where competition and greed can be common. In social exchange, self-interest is not a negative thing; rather, as Michael Roloff (1981) put it, "when self-interest is recognized, it will act as the guiding force of interpersonal relationships for the advancement of both parties' self-interest". Thibaut and Kelley see the mutual interdependence of persons as the central problem for the study of social behavior. They developed a theoretical framework based on the interdependence of actors. They also highlighted the social implications of different forms of interdependence, such as reciprocal control. According to their definition of interdependence, outcomes are based on a combination of the parties' efforts and mutual and complementary arrangements.

Basic concepts

Social exchange theory views exchange as a social behavior that may result in both economic and social outcomes. Social exchange theory has generally been analyzed by comparing human interactions with the marketplace. The study of the theory from the microeconomic perspective is attributed to Blau, under whose perspective every individual is trying to maximize his gains. Blau stated that once this concept is understood, it is possible to observe social exchanges everywhere, not only in market relations but also in other social relations, like friendship. The social exchange process brings satisfaction when people receive fair returns for their expenditures. The major difference between social and economic exchange is the nature of the exchange between parties. Neoclassical economic theory views the actor as dealing not with another actor but with a market and environmental parameters, such as the market price. Unlike economic exchange, the elements of social exchange are quite varied and cannot be reduced to a single quantitative exchange rate. According to Stafford, social exchanges involve a connection with another person; involve trust, not legal obligations; are more flexible; and rarely involve explicit bargaining. Simple social exchange models assume that rewards and costs drive relationship decisions. Both parties in a social exchange take responsibility for one another and depend on each other. The elements of relational life include costs and rewards. Costs are the elements of relational life that have negative value to a person, such as the effort put into a relationship and the negatives of a partner (costs can be time, money, effort, and so on). Rewards are the elements of a relationship that have positive value (rewards can be a sense of acceptance, support, companionship, and so on). The outcomes of these evaluations are relationship satisfaction and dependence. The social-exchange perspective argues that people calculate the overall worth of a particular relationship by subtracting its costs from the rewards it provides. If the worth is a positive number, it is a positive relationship; a negative number indicates a negative relationship. The worth of a relationship influences its outcome, that is, whether people will continue the relationship or terminate it. Positive relationships are expected to endure, whereas negative relationships will probably terminate.
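Stated schematically, in a form the paragraph above implies rather than in notation used by the theorists themselves:

\[
\text{Worth} = \text{Rewards} - \text{Costs}, \qquad
\text{Worth} > 0 \;\Rightarrow\; \text{the relationship is likely to continue}, \qquad
\text{Worth} < 0 \;\Rightarrow\; \text{it is likely to be terminated}.
\]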
In a mutually beneficial exchange, each party supplies the wants of the other party at a lower cost to self than the value of the resources the other party provides. In such a model, mutual relationship satisfaction ensures relationship stability. Homans based his theory on behaviorism to conclude that people pursue rewards while minimizing costs. The "satisfactory-ness" of the rewards that a party gains from an exchange relationship is judged relative to some standard, which may vary from party to party. As summarized by Gouldner, the reciprocity norm states that a benefit should be returned and that the one who gives the benefit should not be harmed. This is used to stabilize relationships and to identify egoism. This norm suggests independence in relationships and invites the individual to consider more than their own self-interest. Altman and Taylor introduced social penetration theory, which studies the nature and quality of social exchange and close bonds. It suggests that once individuals start to give more of their resources to one another, relationships evolve progressively from exchanging superficial goods to other, more meaningful exchanges, progressing to the point called "self-disclosure", where the individuals share innermost thoughts and feelings with one another. In this process, individuals compare their rewards with others' in relation to their costs. Equity can be defined as the balance between a person's inputs and outcomes on the job. Examples of inputs are qualifications, promotions, interest in the job, and how hard one works; outcomes include pay, fringe benefits, and power status. The individual will mainly expect an equitable input-outcome ratio. Inequity happens when an individual perceives an unbalanced ratio between their outcomes and others' outcomes. This can occur in a direct exchange between two parties, or a third party may be involved. Perceptions of equity or inequity can differ from individual to individual.

Aging

The basis of social exchange theory is to explain social change and stability as a process of negotiated exchanges between parties. These changes can occur over a person's life course through various relationships, opportunities, and means of support. An example of this is the convoy model of support, which uses concentric circles to describe the relationships around an individual, with the strongest relationships in the closest circle. As a person ages, these relationships form a convoy that moves along with the person and exchanges support and assistance through the different circumstances that occur. Support also changes in direction, being given both to and by the individual and the people within their support network. Within this model, there are different types of support (social support) a person can receive: intangible, tangible, instrumental, and informational. Intangible support can be social or emotional: the love, friendship, and appreciation that come with valuable relationships. Tangible support consists of physical gifts given to someone, such as land, money, transportation, food, or the completion of chores. Instrumental support comprises services given to someone in a relationship. Finally, informational support is the delivery of information that is helpful to an individual.

Theoretical propositions

Ivan Nye came up with twelve theoretical propositions that aid in understanding the exchange theory.
In his article published in 1978, Nye originally proposed seven propositions that were common to all types of relationships; a few years later he expanded the propositions to a total of twelve. The first five propositions are classified as general propositions and are substance-free, meaning that the propositions themselves can stand alone within the theory. Proposition six has been identified by scholars as expressing a general assumption of a need for social approval as a reward, which can therefore act as a driving force behind actions. Proposition seven will only work if the individual has the freedom to be exempt from outside factors while in a social exchange relationship. The twelfth and final proposition is directed towards the way our society places a heightened value on monetary funds. Even though Homans took an individualistic approach, a major goal of his work was to explicate the micro-foundations of social structures and social exchange. By studying such forms of behavior he hoped to illuminate the informal sub-institutional bases of more complex social behavior, which is typically more formal and often institutionalized. According to Homans, social structures emerge from elementary forms of behavior. His vision of the underpinnings of social structure and institutional forms is linked to the actions of individuals, for example to their responses to rewarding and punishing circumstances. Homans developed five key propositions that assist in structuring individuals' behaviors based on rewards and costs. This set of theoretical ideas represents the core of Homans's version of social exchange theory. Frazer's theory of social exchange, based on economics, emphasizes the importance of power and status differentiation in social exchange; Frazer took a particular interest in cross-cousin marriage. Drawing on the Kula exchange, Malinowski distinguished sharply between economic exchange and social exchange, holding that the motives of exchange can be mainly social and psychological. Mauss's theory tries to identify the role played by morality and religion in social exchange; Mauss argues that exchange in society is influenced by social behaviors, while morality and religion influence all aspects of life. Bohannan focuses his theory on economic problems such as multicentrism and modes of exchange. He contributed to social exchange theory by identifying the role and function of markets in tribal subsistence economies, distinguishing economic redistribution and market exchange from social relationships. He proposes three principles (reciprocity, redistribution, and marketing) as the basis of a new account of socioeconomic change, the transformation of traditional economies, and political-economic development. He presents the idea that the economy is a category of behavior rather than simply a category of culture.

Assumptions

Social exchange theory is not one theory but a frame of reference within which many theories can speak to one another, whether in argument or mutual support. All these theories are built upon several assumptions about human nature and the nature of relationships. Thibaut and Kelley based their theory on two conceptualizations: one that focuses on the nature of individuals, and one that describes the relationships between two people. Thus, the assumptions they make also fall into these two categories.
Social exchange theory thus makes assumptions both about human nature and about the nature of relationships. Social systems result from human activity and function as structures designed to organize, guide, and regulate human affairs. However, variations exist in how costs and benefits are weighed depending on the actors involved, as well as in the interpretation, adoption, enforcement, neglect, and application of norms and sanctions. Furthermore, regarding human nature, the prisoner's dilemma is a widely used example in game theory that attempts to illustrate why or how two individuals may not cooperate with each other, even if it is in their best interest to do so. It demonstrates that while cooperation would give the best outcome, people might nevertheless act selfishly. All relationships involve exchanges, and the balance of these exchanges is considered fair when they are equitable.

Comparison levels

Social exchange includes "both a notion of a relationship, and some notion of a shared obligation in which both parties perceive responsibilities to each other". John Thibaut and Harold Kelley proposed two comparison standards to differentiate between relationship satisfaction and relationship stability. This evaluation rests on two types of comparison: the comparison level and the comparison level for alternatives. According to Thibaut and Kelley, the comparison level (CL) is a standard representing what people feel they should receive in the way of rewards and costs from a particular relationship. An individual's comparison level can be considered the standard by which an outcome seems to satisfy the individual. The comparison level for alternatives (CLalt) refers to "the lowest level of relational rewards a person is willing to accept given available rewards from alternative relationships or being alone". In other words, when using this evaluation tool, an individual considers alternative payoffs or rewards outside of the current relationship or exchange. CLalt provides a measure of stability rather than satisfaction. If people see no alternative and fear being alone more than being in the relationship, social exchange theory predicts they will stay.

Modes of exchange

According to Kelley and Thibaut, people engage in behavioral sequences, or series of actions designed to achieve their goals. This is congruent with their assumption that human beings are rational. When people engage in these behavioral sequences, they are dependent to some extent on their relational partner. In order for behavioral sequences to lead to social exchange, two conditions must be met: the sequence "must be oriented towards ends that can only be achieved through interaction with other persons, and it must seek to adapt means to further the achievement of these ends". The concept of reciprocity also derives from this pattern. The reciprocity principle refers to the mutual reinforcement by two parties of each other's actions. The process begins when at least one participant makes a "move", and if the other reciprocates, new rounds of exchange initiate. Once the process is in motion, each consequence can create a self-reinforcing cycle. Even though the norm of reciprocity may be a universally accepted principle, the degree to which people and cultures apply this concept varies. Several definitions of power have been offered by exchange theorists.
For instance, some theorists view power as distinct from exchange, some view it as a kind of exchange, and others believe power is a medium of exchange. However, the most useful definition of power is that proposed by Emerson, who developed a theory of power-dependence relations. According to this theory, the dependence one person has on another brings up the concept of power. Power differentiation affects social structures by causing inequalities between members of different groups, such as one individual having superiority over another. Power within the theory is governed by two variables: the structure of power in exchange networks and the strategic use of power. Experimental data show that the position an actor occupies in a social exchange network determines relative dependence and therefore power. According to Thibaut and Kelley, there are two types of power: fate control and behavior control. Fate control is the ability to affect a partner's outcomes. Behavior control is the power to cause another's behavior to change by changing one's own behavior. People develop patterns of exchange to cope with power differentials and to deal with the costs associated with exercising power. These patterns describe behavioral rules or norms that indicate how people trade resources in an attempt to maximize rewards and minimize costs. Three different matrices have been described by Thibaut and Kelley to illustrate the patterns people develop: the given matrix, the effective matrix, and the dispositional matrix. Three forms of exchange are described within these matrices: reciprocity, generalized exchange, and productive exchange. In a direct exchange, reciprocation is confined to the two actors: one social actor provides value to another, and the other reciprocates. Three different types of reciprocity have been distinguished. A generalized exchange involves indirect reciprocity between three or more individuals; for example, one person gives to another, and the recipient responds by giving to a person other than the first. Productive exchange means that both actors have to contribute for either of them to benefit, so both people incur costs and gain benefits simultaneously. Another common form of exchange is negotiated exchange, which focuses on the negotiation of rules so that both parties reach a beneficial agreement. Reciprocal exchanges and negotiated exchanges are often analyzed and compared to discover their essential differences. One major difference between the two is the level of risk associated with the exchange and the uncertainty these risks create. Negotiated exchange can consist of binding and non-binding negotiations. Of these forms, reciprocal exchange carries the highest level of risk, which in turn produces the most uncertainty; one such risk, for example, is that the other party may never return the favor and complete the reciprocal exchange. Binding negotiated exchanges involve the least risk, leaving individuals with low levels of uncertainty, while non-binding negotiated exchanges fall in between: since there is no binding agreement, one party involved in the exchange could decide not to cooperate with the agreement.
Critiques

Katherine Miller outlines several major objections to, or problems with, social exchange theory as developed from its early seminal works. More recently, Russell Cropanzano and Marie S. Mitchell discuss how one of the major issues within social exchange theory is the lack of information within studies on the various exchange rules. They suggest that social exchange theory should include psychological and emotional exchanges, which are less visible but just as important. Reciprocity is a major exchange rule discussed, but Cropanzano and Mitchell write that the theory would be better understood if more research programs discussed a variety of exchange rules, such as altruism, group gain, status consistency, and competition. Meeker points out that within the exchange process, each unit takes into account at least the following elements: reciprocity, rationality, altruism (social responsibility), group gain, status consistency, and competition (rivalry). Rosenfeld (2005) has noted significant limitations of social exchange theory and its application to the selection of mates and partners. Specifically, Rosenfeld looked at interracial couples and the application of social exchange theory. His analysis suggests that in modern society there is less of a gap between interracial partners' education levels, socioeconomic statuses, and social class levels, which in turn makes the previously understood application of social exchange moot.

Applications

The most extensive application of social exchange has been in the area of interpersonal relationships. However, social exchange theory materializes in many different situations with the same idea of the exchange of resources. Self-interest can encourage individuals to make decisions that benefit themselves overall. Homans himself summarized the theory as treating social behavior as an exchange of material and non-material goods between persons. Other applications that developed the idea of exchange include the field of anthropology, as evidenced in an article by Harumi Befu discussing cultural ideas and norms. Lévi-Strauss is considered one of the major contributors to the anthropology of exchange. Within this field, self-interest, human sentiment, and motivational processes are not considered. Lévi-Strauss uses a collectivist approach to explain exchanges. To Lévi-Strauss, a social exchange is defined as a regulated form of behavior in the context of societal rules and norms. This contrasts with psychological studies of exchange, in which behaviors are studied while ignoring the culture. Social exchanges from the anthropological perspective have been analyzed using the gift-giving phenomenon. The concept of reciprocity under this perspective states that individuals can directly reward their benefactor, or another person, in the social exchange process. Lévi-Strauss developed a theory of cousin marriage based on the pervasiveness of gift-giving in primitive societies. The basis of this theory is the distinction between restricted exchange, which is only capable of connecting pairs of social groups, and generalized exchange, which integrates indefinite numbers of groups. Within the theory, one can also end up losing established relationships because of the feeling of no longer being beneficial: one feels as if there is no longer a need for a relationship or communication due to a lack of rewards. Once this happens, the process of looking for new partners and resources begins. This allows a continuation of networking, and one may go through this process quite frequently.
A study applied this theory to new media (online dating). The study examines the different factors involved when an individual decides to establish an online relationship. Overall, the study followed social exchange theory's idea that "people are attracted to those who grant them rewards". Another example is Berg's study of the development of friendship between roommates, which traced how social exchange processes changed during the year by measuring self-disclosure. According to the study, the amount one person rewards another and the comparison levels for alternatives become the most important factors in determining liking and satisfaction. Auld, C. and Alan C. conducted a study to discover what processes occur and what is experienced during social leisure relationships, using the concept of reciprocity to understand their findings. The study concluded that meeting new people is often given as a major reason for participation in leisure activities, and that meeting new people may be conceptualized as an exercise of reciprocity. In this case, reciprocity is perceived as a starting mechanism for new social relationships, because people are willing to be helped by others, expecting that the help will eventually be returned. A study conducted by Paul, G., called Exchange and access in field work, tries to understand the relationships between researchers and subjects. It concludes that bargaining helps to satisfy the more specific needs of the parties, because greater risks are taken to obtain more information; the study also introduces the concept of trust to determine the duration of relationships.

Patterns of interracial marriage have been explained using social exchange theory. Kalmijn suggests that ethnic status is offset against educational or financial resources. This process has been used to explain why there are more marriages between black men and white women than between white men and black women. This asymmetry in marriage patterns has been used to support the idea of a racial hierarchy. Lewis, however, explains that the same patterns of marriage can be accounted for in terms of simple facial attractiveness patterns across the gender-by-race groupings. Recent changes have seen an increase in black women marrying white men and a decrease in the raw prevalence of interracial marriages involving black women. There has also been a shift in the concentration of interracial marriage, from mostly being between those with low education levels to those with higher levels of education.

Social exchange theory has served as a theoretical foundation to explain different situations in business practices. It has contributed to the study of organization-stakeholder relationships, supply network relationships, and relationship marketing. The investment model proposed by Caryl Rusbult is a useful version of social exchange theory. According to this model, investments serve to stabilize relationships: the greater the nontransferable investments a person has in a given relationship, the more stable the relationship is likely to be. The same investment concept is applied in relationship marketing, where databases are the major instrument for building differentiated relationships between organizations and customers. Through the information process, companies identify the customer's own individual needs. From this perspective, a client becomes an investment: if a customer chooses a competitor instead, the investment is lost.
When people find they have invested too much to quit a relationship or enterprise, they devote additional resources to the relationship to salvage their initial investment. Exchange has been a central research thrust in business-to-business relational exchange. According to a study conducted by C. Jay Lambe, C. Michael Wittmann, and Robert E. Spekman, firms evaluate the economic and social outcomes of each transaction and compare them to what they feel they deserve. Firms also look for additional benefits provided by other potential exchange partners. The initial transaction between companies is crucial in determining whether their relationship will expand, remain the same, or dissolve. Holmen and Pedersen note that social exchange theory has contributed to the understanding of "connected" business relationships between firms. A study conducted by A. Saks serves as an example of explaining the engagement of employees in organizations. This study uses one of the tenets of social exchange theory to explain that obligations are generated through a series of interactions between parties who are in a state of reciprocal interdependence. The research identified that when individuals receive economic and socioemotional resources from their organization, they feel obliged to respond in kind and repay the organization; this is a description of engagement as a two-way relationship between employer and employee. One way for individuals to repay their organization is through their level of engagement: the more engaged employees are in their work, the greater the cognitive, emotional, and physical resources they will devote to performing their job duties. When the organization fails to provide economic or emotional resources, employees are more likely to withdraw and disengage from their roles. Another, more recent study by M. van Houten, which took place in institutions for vocational education, shows how, in social exchange relationships between teachers, reciprocity and feelings of ownership, affection, and interpersonal safety affect individual professionals' decisions on what to share with whom. Colleagues who never 'pay back' and make actual exchange happen (that is, who consume rather than produce and share) risk being left out. The study also points out the possibility of 'negative rewards': sharing one's knowledge, materials, or anything else may enable someone else to misuse what was shared and/or take credit for it somewhere in the team or organization. As such, interpersonal relationships and 'fair' exchange appear important, as does some kind of mechanism for rewards and gratitude (possibly organization-wide), as these affect individual professional discretion and the degree and success of exchange. Social exchange theory is a theoretical explanation for organizational citizenship behavior. One study examines a model of clear leadership and relational building between the school head and teachers as antecedents, and organizational citizenship behavior as a consequence of teacher-school exchange. Citizenship behavior can also be shown between employees and their employers, as reflected in organizational identification, which plays an important role in organizational citizenship behavior.
An employee's identification with their employer plays a significant role in supporting and promoting organizational citizenship behavior, serving as a mediating mechanism linking citizenship behaviors, perceived organizational justice, and organizational support, based on both social exchange and social identity theory. Understanding interpersonal disclosure in online social networking is an ideal application of social exchange theory. Researchers have leveraged SET to explain self-disclosure in a cross-cultural context of French and British working professionals. They found that reciprocation is the primary benefit of self-disclosure, whereas risk is its foundational cost. They also found that positive social influence to use an online community increases online community self-disclosure; reciprocity increases self-disclosure; online community trust increases self-disclosure; and privacy risk beliefs decrease self-disclosure. Meanwhile, a tendency toward collectivism increases self-disclosure. Similar research also leveraged SET to examine privacy concerns versus the desire for interpersonal awareness in driving the use of self-disclosure technologies in the context of instant messaging; this study was also cross-cultural, but compared US and Chinese participants.

Affect theory

Traditionally, actors in social exchange theory are viewed as rational decision makers who weigh costs and rewards without emotion. The affect theory of social exchange, developed by Lawler (2001), complements social exchange theory by incorporating emotion as part of the exchange process, challenging the traditional view by showing that emotions play an integral role in social exchanges. The theory examines the structural conditions of exchange that produce emotions and feelings, and then identifies how individuals attribute these emotions to different social units (exchange partners, groups, or networks). These attributions of emotion, in turn, dictate how strongly individuals feel attached to their partners or groups, which drives collectively oriented behavior and commitment to the relationship. When individuals experience positive emotions in group or partner-based exchanges, bonds are strengthened and group commitment is encouraged. Most social exchange models have three basic assumptions in common: social behavior is based on exchanges; if an individual allows someone to receive a reward, that person then feels the need to reciprocate due to social pressure; and individuals will try to minimize their costs while gaining the most from their rewards. The affect theory of social exchange builds on assumptions that stem from social exchange theory and affect theory, showing how the conditions of exchange promote interpersonal and group relationships through emotions and affective processes. The theoretical arguments center on the following five claims:

1. Emotions produced by exchange are involuntary, internal responses. Individuals experience emotions (general feelings of pleasantness or unpleasantness) depending on whether their exchange is successful. These emotions can be construed as a reward or punishment, and individuals strive to repeat actions that reproduce positive emotions and to avoid negative emotions.

2. Individuals attempt to understand what in a social exchange situation produces emotions. Individuals use the exchange task to understand the source (partners, groups, or networks) of their emotions.
Individuals are more likely to attribute their emotions to their exchange partners or groups when the task can only be completed with one or more partners, when the task requires interdependent (non-separable) contributions, and when there is a shared sense of responsibility for the success or failure of the exchange.

3. The mode of exchange determines the features of the exchange task and influences the attribution of the emotion produced. The mode of exchange (productive, negotiated, reciprocal, or generalized) provides a description of the exchange task. The task features are defined by the degree of interdependence (separability of tasks) and the shared responsibility between partners to complete the task; these features influence the strength of the emotion felt. Productive exchanges are interdependent, and this high degree of non-separability generates the strongest emotions. Reciprocal exchanges are separable, which reduces perceptions of shared responsibility; the exchange itself produces little emotional response, and individuals instead express emotions in response to the asymmetrical transaction. Generalized exchanges do not occur directly, but interdependence is still high and coordination between partners is difficult; because there is no direct emotional foundation, the emotions produced are weak. Negotiated exchanges may produce conflicting emotions due to the mixed-motive nature of negotiations; even when transactions are successful, individuals may feel they had the ability to do better, creating emotional ambivalence. Overall, productive exchanges produce the strongest attributions of emotion, generalized (indirect) exchange the weakest, with negotiated and reciprocal exchanges in between.

4. The attribution of emotions resulting from different exchange modes impacts the solidarity felt with partners or groups. The different types of exchange (productive, reciprocal, and generalized) also affect the solidarity or identification that an individual feels with their exchange partners or group, helping dictate the target of felt emotions and influencing an individual's attachment. Affective attachment occurs when a social unit (partner or group) is the target of positive feelings from exchange; affective detachment (alienation) occurs when a social unit is the target of negative feelings from failure to exchange. Affective attachment increases solidarity. As with the attribution of emotion, productive exchange produces the strongest affective attachments, generalized exchange the weakest, and negotiated and reciprocal exchange fall in between. One condition under which social (partner or group) attributions can increase solidarity is by reducing self-serving attributions of credit or blame for the success or failure of the exchange. When individuals make group attributions for positive emotions stemming from success, this eliminates self-serving biases and enhances both pride in the self and gratitude toward the partner. However, group attributions for negative emotions stemming from failure do not eliminate self-serving biases, resulting in more anger toward the partner or group than shame in the self. Lawler also proposes that the persistence (stability) of, and the ability to control, acts by the exchange partner (controllability) provide conditions for affective attachment by shaping attributions of credit or blame for the success or failure of the exchange.
Following Weiner (1985), the affect theory of social exchange extrapolates that combinations of stability and controllability elicit different emotions. In social exchange, social connections can be sources of stability and controllability. For example, if an exchange partner is perceived as a stable source of positive feelings, and the exchange partner has control over the acts that elicit those positive feelings, this will strengthen affective attachment. The theory therefore proposes that stable and controllable sources of positive feelings (i.e., pleasantness, pride, gratitude) will elicit affective attachment, while stable and uncontrollable sources of negative feelings (i.e., unpleasantness, shame, anger) will elicit affective detachment.

5. Through these emotional processes, networks can develop group properties. Repeated exchanges allow a network to evolve into a group. Affect theory highlights the contributions of emotions in producing group properties. Successful interactions generate positive feelings for the individuals involved, which motivates them to interact with the same partners in the future. As exchanges repeat, the strong relationships become visible to other parties, making salient their role as a group and helping to generate a group identity that continues to bind the partners together in a network. Affect theory predicts that networks of negotiated and reciprocal exchange will tend to promote stronger relational ties between partners, while productive or generalized exchange will promote stronger network- or group-level ties. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Arizona_State_University] | [TOKENS: 13756] |
Arizona State University Arizona State University (Arizona State or ASU) is a public research university in Tempe, Arizona, United States. Founded in 1885 as Territorial Normal School by the 13th Arizona Territorial Legislature, the university is one of the largest public universities by enrollment in the United States. It was one of about 180 "normal schools" founded in the late 19th century to train teachers for the rapidly growing public common schools. Some closed, but most steadily expanded their role and became state colleges in the early 20th century, then state universities in the late 20th century. One of three universities governed by the Arizona Board of Regents, Arizona State University is a member of the Association of American Universities (AAU) and is classified among "R1: Doctoral Universities – Very High Research Activity". As of fall 2025, ASU has 160,051 students enrolled, with 81,541 students attending online, across its four campuses and four regional learning centers throughout Arizona. ASU offers more than 400 undergraduate degree programs from its 16 colleges and over 170 cross-discipline centers and institutes for students. It also offers more than 450 graduate degree and certificate programs. The Arizona State Sun Devils compete in 26 varsity-level sports in NCAA Division I as a member of the Big 12 Conference. Sun Devil teams have won 165 national championships, including 24 NCAA trophies. 179 Sun Devils have made Olympic teams, winning 60 Olympic medals: 25 gold, 12 silver and 23 bronze. As of fall 2024, ASU had 5,679 faculty members. This included 5 Nobel laureates, 11 MacArthur Fellows, 10 Pulitzer Prize winners, 11 National Academy of Engineering members, 26 National Academy of Sciences members, 28 American Academy of Arts and Sciences members, 41 Guggenheim fellows, 163 National Endowment for the Humanities fellows, and 289 Fulbright Program American Scholars.

History

Arizona State University was established as the Territorial Normal School at Tempe on March 12, 1885, when the 13th Arizona Territorial Legislature passed an act to create a normal school to train teachers for the Arizona Territory. The campus consisted of a single, four-room schoolhouse on a 20-acre plot largely donated by Tempe residents George and Martha Wilson. Classes started with 33 students on February 8, 1886. The curriculum evolved over the years and the name was changed several times; the institution was also known as Tempe Normal School of Arizona (1889–1903), Tempe Normal School (1903–1925), Tempe State Teachers College (1925–1929), Arizona State Teachers College (1929–1945), Arizona State College (1945–1958) and, by a 2–1 margin of the state's voters, Arizona State University in 1958. In 1923, the school stopped offering high school courses and added a high school diploma to the admissions requirements. In 1925, the school became the Tempe State Teachers College and offered four-year Bachelor of Education degrees as well as two-year teaching certificates. In 1929, the 9th Arizona State Legislature authorized Bachelor of Arts in Education degrees as well, and the school was renamed the Arizona State Teachers College. Under the 30-year tenure of president Arthur John Matthews (1900–1930), the school was given all-college student status. The first dormitories built in the state were constructed under his supervision in 1902. Of the 18 buildings constructed while Matthews was president, six are still in use.
Matthews envisioned an "evergreen campus", with many shrubs brought to the campus, and implemented the planting of 110 Mexican Fan Palms on what is now known as Palm Walk, a century-old landmark of the Tempe campus. During the Great Depression, Ralph Waldo Swetman was hired to succeed President Matthews, coming to Arizona State Teachers College in 1930 from Humboldt State Teachers College, where he had served as president. He served a three-year term, during which he focused on improving teacher-training programs. During his tenure, enrollment at the college doubled, topping the 1,000 mark for the first time. Swetman also conceived of a self-supported summer session at Arizona State Teachers College, a first for the school. In 1933, Grady Gammage, then president of Arizona State Teachers College at Flagstaff, became president of Arizona State Teachers College at Tempe, beginning a tenure that would last for nearly 28 years, second only to Matthews's 30 years at the college's helm. Like President Arthur John Matthews before him, Gammage oversaw the construction of several buildings on the Tempe campus. He also guided the development of the university's graduate programs; the first Master of Arts in Education was awarded in 1938, the first Doctor of Education degree in 1954, and 10 non-teaching master's degrees were approved by the Arizona Board of Regents in 1956. During his presidency, the school's name was changed to Arizona State College in 1945, and finally to Arizona State University in 1958. At the time, two other names were considered: Tempe University and State University at Tempe. Among Gammage's greatest achievements in Tempe was the construction of the Frank Lloyd Wright-designed Grady Gammage Memorial Auditorium (ASU Gammage). One of the university's hallmark buildings, ASU Gammage was completed in 1964, five years after the deaths of both the president and Wright. Gammage was succeeded by Harold D. Richardson, who had served the school earlier in a variety of roles beginning in 1939, including director of graduate studies, college registrar, dean of instruction, dean of the College of Education and academic vice president. Although he filled the role of acting president of the university for just nine months (Dec. 1959 to Sept. 1960), Richardson laid the groundwork for the future recruitment and appointment of well-credentialed research science faculty. By the 1960s, under G. Homer Durham, the university's 11th president, ASU began to expand its curriculum by establishing several new colleges and, in 1961, the Arizona Board of Regents authorized doctoral degree programs in six fields, including Doctor of Philosophy. By the end of his nine-year tenure, ASU had more than doubled its enrollment, reporting 23,000 students in 1969. The next three presidents—Harry K. Newburn (1969–71), John W. Schwada (1971–81) and J. Russell Nelson (1981–89), along with Interim President Richard Peck (1989)—led the university to increased academic stature, the establishment of the ASU West Valley campus in 1984 and its subsequent construction in 1986, a focus on computer-assisted learning and research, and rising enrollment. Under the leadership of Lattie F. Coor, president from 1990 to 2002, ASU grew through the creation of the Polytechnic campus and extended education sites. Increased commitment to diversity, quality in undergraduate education, research, and economic development occurred over his 12-year tenure.
Part of Coor's legacy to the university was a successful fundraising campaign: through private donations, more than $500 million was invested in areas that would significantly impact the future of ASU. Among the campaign's achievements were the naming and endowing of Barrett, The Honors College, and the Herberger Institute for Design and the Arts; the creation of many new endowed faculty positions; and hundreds of new scholarships and fellowships. In 2002, Michael M. Crow became the university's 16th president. At his inauguration, he outlined his vision for transforming ASU into a "New American University", one that would be open and inclusive, and he set a goal for the university to meet Association of American Universities criteria and to become a member. Crow initiated the transformation of ASU into "One university in many places": a single institution comprising several campuses, sharing students, faculty, staff and accreditation. Subsequent reorganizations combined academic departments, consolidated colleges and schools, and reduced staff and administration as the university expanded its West Valley and Polytechnic campuses. ASU's Downtown Phoenix campus was also expanded, with several colleges and schools relocating there. The university established learning centers throughout the state, including the ASU Colleges at Lake Havasu City and programs in Thatcher, Yuma, and Tucson. Students at these centers can choose from several ASU degree and certificate programs. During Crow's tenure, and aided by hundreds of millions of dollars in donations, ASU began a years-long research facility capital building effort that led to the establishment of the Biodesign Institute at Arizona State University, the Julie Ann Wrigley Global Institute of Sustainability, and several large interdisciplinary research buildings. Along with the research facilities, the university faculty was expanded, including the addition of five Nobel Laureates. Since 2002, the university's research expenditures have tripled, and more than 1.5 million square feet of space has been added to the university's research facilities. The economic downturn that began in 2008 took a particularly hard toll on Arizona, resulting in large cuts to ASU's budget. In response to these cuts, ASU capped enrollment, closed some four dozen academic programs, combined academic departments, consolidated colleges and schools, and reduced university faculty, staff and administrators; with an economic recovery underway in 2011, however, the university continued its campaign to expand the West Valley and Polytechnic campuses, and to establish a low-cost, teaching-focused extension campus in Lake Havasu City. In 2011, an article in Slate reported that "the bottom line looks good", noting that: Since Crow's arrival, ASU's research funding has almost tripled to nearly $350 million. Degree production has increased by 45 percent. And thanks to an ambitious aid program, enrollment of students from Arizona families below poverty is up 647 percent. On May 1, 2014, ASU was listed as one of fifty-five higher education institutions under investigation by the Office for Civil Rights "for possible violations of federal law over the handling of sexual violence and harassment complaints" by Barack Obama's White House Task Force To Protect Students from Sexual Assault. The publicly announced investigation followed two Title IX suits.
In July 2014, a group of at least nine current and former students who alleged they were harassed or assaulted asked that the federal investigation be expanded. In August 2014, ASU president Michael Crow appointed a task force comprising faculty and staff, students, and members of the university police force to review the university's efforts to address sexual violence. Crow accepted the recommendations of the task force in November 2014. In 2015, the Thunderbird School of Global Management became the Thunderbird School of Global Management at ASU. Partnerships for education and research with Mayo Clinic established collaborative degree programs in health care and law, and shared administrator positions, laboratories and classes at the Mayo Clinic Arizona campus. The Beus Center for Law and Society, the new home of ASU's Sandra Day O'Connor College of Law, opened in fall 2016 on the Downtown Phoenix campus, relocating faculty and students from the Tempe campus to the state capital. In September 2024, ASU announced several cuts in response to state budget cuts, including the closure of the Lake Havasu City campus, a reduction of the Arizona Teachers Academy and the addition of a "tuition surcharge".

Organization and administration

The Arizona Board of Regents (ABOR) governs Arizona State University as well as the state's other public universities: University of Arizona and Northern Arizona University. The board is composed of 12 members: 11 voting members and one non-voting member. The board comprises the state governor and superintendent of public instruction acting as ex-officio members, eight volunteer members with eight-year terms who are appointed by the governor, and two student regents, each with a two-year term, the first year of which is served as a non-voting apprentice. ABOR provides policy guidance to the state universities of Arizona. ASU has four campuses in metropolitan Phoenix, Arizona: the Tempe campus in Tempe, the West Valley campus in Glendale, the Downtown Phoenix campus, and the Polytechnic campus in Mesa. ASU also offers courses and degrees through ASU Online and at the ASU Colleges at Lake Havasu City in western Arizona, and offers regional learning programs in Thatcher, Yuma and Tucson. The Arizona Board of Regents appoints and elects the president of the university, who is considered the institution's chief executive officer and the chief budget officer. The president executes measures enacted by the Board of Regents, controls the university's property, and acts as the university's official representative to the Board of Regents. The chief executive officer is assisted in the administration of the institution by the provost, vice presidents, deans, faculty, directors, department chairs, and other officers. The president also selects and appoints administrative officers and general counsels. The 16th ASU president is Michael M. Crow, who has served since July 1, 2002.

Campuses and locations

ASU has four campuses in the Phoenix metropolitan area and regional learning centers throughout Arizona, in addition to facilities located in Los Angeles, Washington, D.C., and Hawaii. Unlike most multi-campus institutions, ASU describes itself as "one university in many places", implying there is "not a system with separate campuses, and not one main campus with branch campuses". The university considers each campus "distinctive" and academically focused on certain aspects of the overall university mission.
The Tempe campus is the university's research and graduate school center. Undergraduate studies on the Tempe campus are research-based programs that prepare students for graduate school, professional school, or employment. The Polytechnic campus is designed with an emphasis on professional and technological programs for direct workforce preparation. The Polytechnic campus is the site of many of the university's simulators and laboratories dedicated to project-based learning. The West Valley campus is focused on interdisciplinary degrees and the liberal arts, while maintaining professional programs with a direct impact on the community and society. The Downtown Phoenix campus focuses on direct urban and public programs such as nursing, public policy, criminal justice, mass communication, journalism, and law, as well as the Thunderbird School of Global Management. Valley Metro Rail connects the Tempe and Downtown Phoenix campuses, and inter-campus shuttles allow students and faculty to travel easily between the campuses. In addition to in-person classes, ASU Online, headquartered at SkySong (on the former Los Arcos Mall site) in Scottsdale, provides online and extended education. In 2018, the Arizona Board of Regents reported that the ASU facilities inventory totaled more than 23 million gross square feet. ASU's Tempe campus is in downtown Tempe, Arizona, about eight miles (13 km) east of Phoenix. The campus is considered urban and is approximately 660 acres (2.7 km2) in size. It is arranged around broad pedestrian malls and is completely encompassed by an arboretum. The Tempe campus is also the largest of ASU's campuses, with 55,312 students enrolled as of fall 2025. The campus is generally considered to be bounded by Rural Road on the east, Mill Avenue on the west, Apache Boulevard on the south, and Rio Salado Parkway on the north. The Tempe campus is ASU's original campus, and Old Main, the oldest building on campus, still stands. Today's university was founded there as the Territorial Normal School and was originally a teachers college. There are many notable landmarks on campus, including Grady Gammage Memorial Auditorium, designed by Frank Lloyd Wright; Palm Walk, which is lined by 111 palm trees; Charles Trumbull Hayden Library; the University Club building; Margaret Gisolo Dance Theatre; Arizona State University Art Museum; and University Bridge. Furthermore, the Tempe campus is home to Barrett, The Honors College. In addition, the campus has an extensive public art collection, which was named "the single most impressive venue for contemporary art in Arizona" by Art in America magazine. Against the northwest edge of campus is the Mill Avenue district (part of downtown Tempe), which has a college atmosphere that attracts many students to its restaurants and bars. Students also have Tempe Marketplace, a shopping, dining and entertainment center with an outdoor setting near the northeast border of the campus. The Tempe campus is also home to all of the university's athletic facilities. Established in 1984 by the Arizona legislature, the West Valley campus sits on 277.92 acres (1.1247 km2) in a suburban area of northwest Phoenix. The West Valley campus lies about 12 miles (19 km) northwest of Downtown Phoenix, and about 18 miles (29 km) northwest of the Tempe campus. The West Valley campus is designated as a Phoenix Point of Pride and is nearly completely powered by a solar array.
The campus serves 5,299 students as of fall 2025 and offers more than 100 degree programs from the New College of Interdisciplinary Arts and Sciences, the Mary Lou Fulton Teachers College, W. P. Carey School of Business, College of Public Service and Community Solutions, College of Health Solutions, and the College of Nursing and Health Innovation. Founded in 1996 as "ASU East", the ASU Polytechnic campus serves 6,170 students as of fall 2025 and is home to more than 130 bachelor's, master's and doctoral degrees in professional, technical science, humanities, social science and pre-health programs through the W. P. Carey School of Business/Morrison School of Management and Agribusiness, Mary Lou Fulton Teachers College, Ira A. Fulton Schools of Engineering, and College of Integrative Sciences and Arts. The campus, a desert arboretum, includes outdoor learning labs and spaces as well as leading-edge simulators and indoor lab spaces to support teaching and research in various fields of study. The 600-acre (2.4 km2) campus is in southeast Mesa, Arizona, approximately 25 miles (40 km) southeast of the Tempe campus, and 33 miles (53 km) southeast of downtown Phoenix. The Polytechnic campus sits on the former Williams Air Force Base and is adjacent to the Phoenix-Mesa Gateway Airport and Chandler-Gilbert Community College (Williams campus). The Downtown Phoenix campus was established in 2006 on the north side of Downtown Phoenix. The campus has an urban design, with several large modern academic buildings intermingled with commercial and retail office buildings. In addition to the new buildings, the campus included the adaptive reuse of several existing structures, including a 1930s-era post office that is on the National Register of Historic Places. Serving 10,769 students as of fall 2025, the campus houses the College of Health Solutions, College of Integrative Sciences and Arts, College of Nursing and Health Innovation, Watts College of Public Service & Community Solutions, Mary Lou Fulton Teachers College, and Walter Cronkite School of Journalism and Mass Communication. In 2013, the campus added the Sun Devil Fitness Center in conjunction with the original YMCA building. ASU's Sandra Day O'Connor College of Law relocated from Tempe to the Downtown Phoenix campus in 2016. ASU Online offers more than 150 undergraduate and graduate degree programs through an online platform. The degree programs delivered online hold the same accreditation as the university's traditional face-to-face programs. ASU Online is headquartered at ASU's SkySong campus in Scottsdale, Arizona. As of 2018, ASU Online was ranked in the Top 4 for Best Online Bachelor's Programs by U.S. News & World Report. Online students are taught by the same faculty and receive the same diploma as on-campus students. ASU Online programs allow students to learn in highly interactive environments through student collaboration and personalized learning technology. In April 2015, ASU Online announced a partnership with edX to form a one-of-a-kind program called the Global Freshman Academy. The program is open to all potential students, who do not need to submit a high school transcript or GPA to apply for the courses. As of spring 2017, more than 25,000 students were enrolled through ASU Online. In June 2014, ASU Online and Starbucks announced a partnership called the Starbucks College Achievement Plan.
The Starbucks College Achievement Plan offers all benefits-eligible employees full-tuition coverage when they enroll in any one of ASU Online's undergraduate degree programs. In 2016, Mayo Clinic and ASU formed a new platform for health care education and research: the Mayo Clinic and Arizona State University Alliance for Health Care. Beginning in 2017, Mayo Clinic School of Medicine students in Phoenix and Scottsdale were among the first to earn a certificate in the Science of Health Care Delivery, with the option to earn a master's degree in the Science of Health Care Delivery through ASU. Following a nearly 15-year presence in Washington, D.C., through smaller-scale operations, ASU opened the Barrett and O'Connor Center in 2018 to solidify the university's contacts with the capital city. The center houses ASU's D.C.-based academic programs, including the Washington Bureau of the Walter Cronkite School of Journalism and Mass Communication, the Sandra Day O'Connor College of Law Rule of Law and Governance program, the Capital Scholars program, and the McCain Institute's Next Generation Leaders program, among many others. In addition to hosting classes and internships on-site, special lectures and seminars taught from the Barrett & O'Connor Washington Center are connected to classrooms in Arizona through video-conferencing technology. The Barrett and O'Connor Center is located at 1800 I St NW, Washington, DC 20006, close to the White House. ASU operates its "California Center" in Los Angeles across two buildings: the former Herald Examiner Building (known as ASU California Center Broadway) and ASU California Center Grand, previously home to the Fashion Institute of Design & Merchandising. The center offers undergraduate and graduate degree programs, executive education, workshops and seminars. In 2022, ASU acquired a small nonprofit college, Columbia College Hollywood, and renamed it California College of ASU. In 2023, ASU reached an agreement with the for-profit Fashion Institute of Design & Merchandising to take over some of its academic programs, creating ASU FIDM. In response to demands for lower-cost public higher education in Arizona, ASU developed a small, undergraduate-only college in Lake Havasu City. ASU Colleges was teaching-focused and provided a selection of popular undergraduate majors at lower tuition rates than other Arizona research universities, with a 15-to-1 student-to-faculty ratio. The campus closed in June 2025 in response to state budget cuts.

Academics

As of August 2022, ASU had a systemwide enrolled student population (both in-person and online) of 140,759, a 4% increase over the systemwide total in 2021. Of that total, approximately 79,000 students were enrolled in person at one of the ASU campuses, an increase of 3.2% from 2021. Just over 61,000 students were enrolled in ASU Online courses and programs as of August 2022, an increase of roughly 7% in online student enrollment from the previous year. According to U.S. News & World Report, ASU admitted 88% of all freshman applicants for the 2022–2023 academic year, and the magazine classified the school's admissions in the "selective" category. The average high school GPA of incoming first-year students for the 2022–23 academic year was 3.54. Barrett, The Honors College is ranked among the top honors programs in the nation. Although there are no set minimum admissions criteria for Barrett College, the average GPA of fall 2017 incoming freshmen was 3.78, with an average SAT score of 1380 and an average ACT score of 29.
The Honors College has 7,236 students, with 719 National Merit Scholars. ASU enrolls 10,268 international students, 14.3% of the total student population. The international student body represents more than 150 nations. The Institute of International Education ranked ASU as the top public university in the U.S. for hosting international students in 2016–2017. In June 2022, Arizona State University was designated a Hispanic-serving institution (HSI) by the United States Department of Education in recognition of the fact that, for the first time in the school's history, Hispanic students comprised over 25% of the university's total undergraduate enrollment during the fall semester of 2021. ASU offers over 350 majors to undergraduate students, and more than 100 graduate programs leading to numerous master's and doctoral degrees in the liberal arts and sciences, design and arts, engineering, journalism, education, business, law, nursing, public policy, technology, and sustainability. These programs are divided into 16 colleges and schools that are spread across ASU's six campuses. ASU also offers the 4+1 accelerated program, which allows students in their senior year to attain their master's degree the following year. The 4+1 accelerated program is not available for all majors; for example, in the Mary Lou Fulton Teachers College the 4+1 accelerated program only works with Education Exploratory majors. ASU uses a plus-minus grading system, with the highest cumulative GPA awarded at the time of graduation being 4.0. Arizona State University is accredited by the Higher Learning Commission. ASU is one of only four universities in the country to offer a certificate in veterans studies. The 2025 U.S. News & World Report ratings ranked ASU tied for 117th among universities in the United States and tied for 192nd globally. It was also tied for 57th among public universities in the United States, and was ranked 1st among "most innovative schools", tied for 17th in "best undergraduate teaching", 166th in "best value schools", and tied for 173rd in "top performers on social mobility" among national universities in the U.S. The innovation ranking, new for 2016, was determined by a poll of top college officials nationwide asking them to name institutions "that are making the most innovative improvements in terms of curriculum, faculty, students, campus life, technology or facilities". ASU is ranked 49th–58th in the U.S. and 151st–200th in the world among the top 1000 universities in the 2025 Academic Ranking of World Universities, and 65th in the U.S. and 196th in the world by the 2025 Center for World University Rankings. Money magazine ranked ASU 124th in the country out of 739 schools evaluated for its 2020 "Best Colleges for Your Money" edition. The Wall Street Journal ranks ASU 5th in the nation for producing the best-qualified graduates, as determined by a nationwide poll of corporate recruiters. ASU's Walter Cronkite School of Journalism and Mass Communication has been named one of America's top 10 journalism schools by national publications and organizations for more than a decade. The rankings include: College Magazine (10th), Quality Education and Jobs (6th), and International Student (1st). ASU is also one of 250 global universities selected for the Emerging Group's 2025 Global Employability University Ranking and Survey (GEURS), and is ranked 41st in the world (14th in the U.S.) within this select group.
For its efforts as a national leader in campus sustainability, ASU was named one of the top 6 "Cool Schools" by the Sierra Club in 2017, was named one of the Princeton Review's most sustainable schools in 2015, and earned an "A−" grade on the 2011 College Sustainability Green Report Card. ASU is classified among "R1: Doctoral Universities – Very High Research Activity". The university spent $673 million on research and development in fiscal year 2020, ranking it 43rd nationally. ASU is a NASA-designated national space-grant institute and a member of the Universities Research Association. In 2023, it became a member of the Association of American Universities, an elite organization of 71 research universities in the U.S. and Canada. The university is ranked in the top 10 for NASA-funded research expenditures. The university has raised more than $999 million in external funding, and more than 180 companies based on ASU innovations have been launched through the university's exclusive intellectual property management company, Skysong Innovations. The U.S. National Academy of Inventors and the Intellectual Property Owners Association rank ASU in the top 10 nationally and No. 11 globally for U.S. patents awarded to universities in 2020, along with MIT, Stanford and Harvard; ASU jumped to 10th place from 17th in 2017, according to the same organizations. Skysong Innovations' venture funding total includes $96 million raised in fiscal year 2016 alone. In 2013, the Sweden-based University Business Incubator (UBI) Index named ASU one of the top universities in the world for business incubation, ranking it 17th. UBI reviewed 550 universities and associated business incubators from around the world using an assessment framework that takes more than 50 performance indicators into consideration. As an example, one of ASU's spin-offs (Heliae Development, LLC) raised more than $28 million in venture capital in 2013 alone. In June 2016, ASU received the Entrepreneurial University Award from the Deshpande Foundation, a philanthropic organization that supports social entrepreneurship and innovation. The university's push to create various institutes has led to greater funding and an increase in the number of researchers in multiple fields. ASU Knowledge Enterprise (KE) advances research, innovation, strategic partnerships, entrepreneurship, economic development and international development. KE is led by Sally C. Morton and supports several interdisciplinary research institutes and initiatives. Other notable institutes at ASU are the Institute of Human Origins, the L. William Seidman Research Institute (W. P. Carey School of Business), the Learning Sciences Institute, the Herberger Research Institute, and the Hispanic Research Center. The Biodesign Institute, for instance, conducts research on issues such as biomedical and health care outcomes as part of a collaboration with Mayo Clinic to diagnose and treat diseases. The institute has attracted more than $760 million in external funding, filed 860 invention disclosures and nearly 200 patents, and generated 35 spinout companies based on its research. In the early months of the COVID-19 pandemic, Biodesign developed a rapid, saliva-based testing option for the university community, and partnered with the Arizona Department of Health Services to make the saliva-based COVID test available to the public.
In October 2021, Biodesign announced its millionth test. The institute is also heavily involved in sustainability research, primarily through the reuse of CO2 via biological feedstocks and various biomasses (e.g. algae) to synthesize clean biofuels. Heliae is a Biodesign Institute spin-off, and much of its business centers on algal-derived, high-value products. Furthermore, the institute is heavily involved in security research, including technology that can detect biological and chemical changes in the air and water. The university has received more than $30 million in funding from the Department of Defense for adapting this technology to detect the presence of biological and chemical weapons. Research conducted at the Biodesign Institute by ASU professor Charles Arntzen made possible the production of Ebola antibodies in specially modified tobacco plants, which researchers at Mapp Biopharmaceutical used to create the Ebola therapeutic ZMapp. The treatment is credited with saving the lives of two aid workers. For his work, Arntzen was named the No. 1 honoree in Fast Company's 2015 "100 Most Creative People in Business" awards. World-renowned scholars have been integral to the successes of the institutes associated with the university. ASU students and researchers have been selected as Marshall, Truman, Rhodes, and Fulbright Scholars, with the university ranking 1st overall in the U.S. for Fulbright Scholar awards to faculty and 5th overall for recipients of Fulbright U.S. Student awards in the 2015–2016 academic year. ASU faculty includes Nobel Laureates, Royal Society members, National Academy members, and members of the National Institutes of Health. ASU professor Donald Johanson, who discovered the 3.18-million-year-old fossil hominid Lucy (Australopithecus) in Ethiopia, established the Institute of Human Origins (IHO) in 1981. The institute was established in Berkeley, California, and later moved to ASU in 1997. As one of the leading research organizations in the United States devoted to the science of human origins, IHO pursues a transdisciplinary strategy for field and analytical paleoanthropological research. The Herberger Institute Research Center supports the scholarly inquiry, applied research and creative activity of more than 400 faculty and nearly 5,000 students. The renowned ASU Art Museum, Herberger Institute Community Programs, urban design, and other outreach and initiatives in the arts community round out the research and creative activities of the Herberger Institute. Among the well-known professors within the Herberger Institute is Johnny Saldaña of the School of Theatre and Film. Saldaña received the 1996 Distinguished Book Award and the prestigious Judith Kase Cooper Honorary Research Award, both from the American Alliance for Theatre Education (AATE). The Julie Ann Wrigley Global Institute of Sustainability is the center of ASU's initiatives focusing on practical solutions to environmental, economic, and social challenges. The institute has partnered with various cities, universities, and organizations from around the world to address issues affecting the global community. ASU is also involved with NASA in the field of space exploration. To meet the needs of NASA programs, ASU built the LEED Gold-certified, 298,000-square-foot Interdisciplinary Science and Technology Building IV (ISTB 4) at a cost of $110 million in 2012. The building includes space for the School of Earth and Space Exploration (SESE) and includes labs and other facilities for the Ira A.
Fulton Schools of Engineering. One of the main projects at ISTB 4 is the OSIRIS-REx Thermal Emission Spectrometer (OTES). Although ASU built the spectrometers aboard the Martian rovers Spirit and Opportunity, OTES will be the first major scientific instrument completely designed and built at ASU for a NASA space mission. Phil Christensen, the principal investigator for the Mars Global Surveyor Thermal Emission Spectrometer (TES), is a Regents' Professor at ASU. He also serves as the principal investigator for the Mars Odyssey THEMIS instruments, as well as co-investigator for the Mars Exploration Rovers; ASU scientists are responsible for the Mini-TES instruments aboard those rovers. The Buseck Center for Meteorite Studies, which is home to rare Martian meteorites and exotic fragments from space, and the Mars Space Flight Facility are on ASU's Tempe campus. In 2017, Lindy Elkins-Tanton of ASU was selected by NASA to lead a deep space mission to Psyche, a metal asteroid believed to be a former planetary core. The $450 million project is the first NASA mission led by the university. The Army Research Laboratory extended funding for the Arizona State University Flexible Display Center (FDC) in 2009 with a $50 million grant. The university has partnered with the Pentagon on such endeavors since 2004, beginning with an initial $43.7 million grant. In 2012, researchers at the center created the world's largest flexible full-color organic light-emitting diode (OLED) display, which at the time measured 7.4 inches. The following year, the FDC staff broke their own world record, producing a 14.7-inch version of the display. The technology delivers high performance while remaining cost-effective during the manufacturing process. Vibrant colors, high switching speeds for video and reduced power consumption are some of the features the center has integrated into the technology. In 2012, ASU eliminated the need for specialized equipment and processing, thereby reducing costs compared to competing approaches. The Luminosity Lab is a student-led research and development think tank located on the Tempe campus of ASU. It was founded in 2016 by Dr. Mark Naufel, with fifteen students from multiple disciplines selected for the initial team. ASU's faculty and students are served by nine libraries across five campuses: Hayden Library, Noble Library, Music Library and Design and the Arts Library on the Tempe campus; Fletcher Library on the West campus; the Downtown Phoenix campus library and Ross-Blakley Law Library at the Downtown Phoenix campus; the Polytechnic campus library; and the Thunderbird Library at the Thunderbird campus. As of 2013, ASU's libraries held 4.5 million volumes. The Arizona State University library system is ranked the 34th-largest research library in the United States and Canada, according to criteria established by the Association of Research Libraries that measure various aspects of the quality and size of the collection. The university continues to grow its special collections, such as the recent addition of a privately held collection of manuscripts by poet Rubén Darío. Hayden Library is on Cady Mall in the center of the Tempe campus. It opened in 1966 and is the largest library facility at ASU. An expansion in 1989 created a subterranean entrance underneath Hayden Lawn that is attached to the above-ground portion of the original library. There are two floors underneath Hayden Lawn, with a landmark known as the "Beacon of Knowledge" rising from the center.
The underground library lights the beacon at night. More expansions were completed in 2013 and 2020. The 2013 capital improvement plan approved by the Arizona Board of Regents incorporated a $35 million repurposing and renovation project for Hayden Library. The open-air moat area that served as an outdoor study space was enclosed to increase indoor space for the library, and as part of the renovation the front entrance of Hayden Library was rebuilt.

Sustainability

As of March 2014, ASU was the top institution of higher education in the United States for solar generating capacity. As of May 2016, the university's on-campus solar arrays had a generating capacity of over 24 megawatts (MW), an increase over the June 2012 total of 15.3 MW. ASU has 88 solar photovoltaic (PV) installations containing 81,424 solar panels across four campuses and the ASU Research Park. An additional 29 MWdc solar installation was dedicated at Red Rock, Pinal County, Arizona, in January 2017, bringing the university's solar generating capacity to 50 MWdc. Six wind turbines installed on the roof of the Julie Ann Wrigley Global Institute of Sustainability building on the Tempe campus have operated since October 2008. Under normal conditions, the six turbines produce enough electricity to power approximately 36 computers. In 2021, ASU researchers installed a passive radiative cooling film at local Tempe bus shelters to lower temperatures during the daytime by radiating heat to space with zero energy use. The film, produced by 3M, cooled shelter temperatures by 4 °C, in one of the first applications of the cooling film in the country. ASU's School of Sustainability was the first school in the United States to introduce degrees in the field of sustainability. The school, part of the Wrigley Global Institute of Sustainability, was established in spring 2007 and began enrolling undergraduates in fall 2008. It offers majors, minors, and a number of certificates in sustainability. ASU is also home to the Sustainability Consortium, which was founded by Jay Golden in 2009. The School of Sustainability has been essential in establishing the university as "a leader in the academics of sustainable business". The university is widely considered to be one of the most ambitious and principled organizations for embedding sustainable practices into its operating model, and it has embraced several challenging sustainability goals. Among the numerous benchmarks outlined in the university's prospectus is the creation of a large recycling and composting operation intended to eliminate 30% of waste and divert 90% of waste from landfills. This endeavor will be aided by educating students about the benefits of avoiding the overconsumption that contributes to excessive waste. Sustainability courses have been expanded to attain this goal, and many of the university's individual colleges and schools have integrated such material into their lectures and courses. Second, ASU is on track to reduce its rate of water consumption by 50%. The university's most aggressive benchmark is to be the first large research university to achieve carbon neutrality in its Scope 1, Scope 2 and non-transportation Scope 3 greenhouse gas (GHG) emissions.
ASU's College of Integrative Sciences and Arts (CISA) offers degrees and certifications focused on sustainable horticulture, natural resource ecology, indoor farming, desert food production and wildlife management through its College of Applied Sciences and Arts at ASU's Polytechnic campus. CISA's Burrowing Owl Conservation Project at the Polytechnic campus was noted as one of the distinctive features of ASU in the Sierra Club magazine's ranking of ASU as the top "cool school" for sustainability in 2021. CISA faculty at the Polytechnic campus, in disciplines such as applied biological sciences and technical communication and user experience, are involved in research and community outreach to promote the sustainable use of resources and the preservation of species and habitat. Vertical farming, indoor farming, and water conservation efforts are just a few of the sustainability initiatives being driven by CISA faculty.

Traditions

Gold is the oldest color associated with Arizona State University, dating back to 1896 when the school was named the Tempe Normal School. Maroon and white were later added to the color scheme in 1898. Gold signifies the "golden promise" of ASU, a promise that every student will receive a valuable educational experience; it also signifies the sunshine Arizona is famous for, including the power of the sun and its influence on the climate and the economy. The first uniforms worn by athletes associated with the university were black and white, when the athletic teams were known as the "Normals". The student section, known as The Inferno, wears gold on game days. Maroon signifies sacrifice and bravery, while white represents the balance of negativity and positivity. Because the university is in the city of Tempe, Arizona, the school's colors adorn neighboring buildings during big game days and festive events. Sparky the Sun Devil is the mascot of Arizona State University and was named by a vote of the student body on November 8, 1946. Sparky often travels with the team across the country and has been at every football bowl game in which the university has participated. The university's mascot is not to be confused with the athletics department's logo, the Pitchfork, or the hand gesture used by those associated with the university. The new logo is used on various sport facilities, uniforms and athletics documents. Arizona State Teachers College had a different mascot, and its sports teams were known as the Owls and, later, the Bulldogs. When the school was first established, the Tempe Normal School's teams were simply known as the Normals. Sparky is visible on the sidelines of every home game played in Sun Devil Stadium or other ASU athletic facilities. His routine at football games includes pushups after every touchdown scored by the Sun Devils. He is aided by Sparky's Crew, male yell leaders who must meet physical requirements to participate as members. The female members are known as the Spirit Squad and are categorized into a dance line and a spirit line; they are the official squad that represents ASU. The spirit squad competes every year at the ESPN Universal Dance Association (UDA) College Nationals in the Jazz and Hip-Hop categories, and was chosen by the UDA to represent the US at the World Dance Championship 2013 in the Jazz category. A letter has existed on the slope of the mountain since 1918; a "T" followed by an "N" were the first letters to grace the landmark.
Tempe Butte, home to "A" Mountain, has had the "A" installed on the slope of its south face since 1938; the letter is visible from the campus just to the south. The original "A" was destroyed by vandals in 1952 with pipe bombs, and a new "A", constructed of reinforced concrete, was built in 1955. The vandals were never identified, but many speculate the conspirators were students from the rival in-state university (University of Arizona). Many ancient Hohokam petroglyphs were destroyed by the bomb; nevertheless, many of these archeological sites around the mountain remain. There are many traditions surrounding "A" Mountain, including a revived "guarding of the 'A'" in which students camp on the mountainside before games with rival schools. "Echo from the Buttes" is a tradition in which incoming freshmen paint the letter white during orientation week; it is repainted gold before the first football game of the season. The practice dates back to the 1930s and grew in popularity, with thousands of students going up to paint the "A" every year. The Lantern Walk is one of the oldest traditions at ASU, dating back to 1917. It is considered one of ASU's "most cherished" traditions and is an occasion used to mark the work of those associated with ASU throughout history. Anyone associated with ASU is free to participate in the event, including students, alumni, faculty, employees, and friends. This differs slightly from the original tradition, in which the seniors would carry lanterns up "A" Mountain followed by the freshmen. The senior class president would describe ASU's traditions, and the freshmen would repeat an oath of allegiance to the university. It was described as a tradition of "good will between the classes" and a way of ensuring that new students would continue the university's traditions with honor. In modern times, the participants walk through campus and follow a path up "A" Mountain to "light up" Tempe. Keynote speakers, performances, and other events mark the occasion, and the night culminates with a fireworks display. The Lantern Walk was once held after the spring semester (in June) but is now held the week before Homecoming, a tradition that dates to 1924 at ASU; it is held in the fall, in conjunction with a football game. In 2012, Arizona State University reintroduced the tradition of ringing a bell after each win by the football team. The ROTC cadets associated with the university transport the bell to various events and ring it after Sun Devil victories. The first Victory Bell, in various forms, was used in the 1930s, but the tradition faded in the 1970s when the bell was removed from the Memorial Union during renovations; the bell cracked and was no longer capable of ringing. That bell, given to the university in the late 1960s, is painted gold and stands as a campus landmark on the southeast corner of Sun Devil Stadium, near the entrance to the student section. The Arizona State University Sun Devil Marching Band, created in 1915 and known as the "Pride of the Southwest", was the first of only two marching bands in the Pac-12 to receive the prestigious Sudler Trophy, which the John Philip Sousa Foundation awarded the band in 1991. The Sun Devil Marching Band remains one of only 28 bands in the nation to have earned the designation. The band performs at every football game played in Sun Devil Stadium, and has made appearances at the Fiesta Bowl, the Rose Bowl, the Holiday Bowl, and Super Bowl XLII, among many others.
Smaller ensembles of band members perform at other sports venues, including basketball games at Wells Fargo Arena and baseball games. The Devil Walk is held in Wells Fargo Arena by the football team and involves a more formal introduction of the players to the community, a new approach to the tradition added in 2012 with the arrival of head coach Todd Graham. It begins 2 hours and 15 minutes prior to the game and allows the players to establish rapport with the fans. The walk ends as the team passes the band and the fans lined along the path to Sun Devil Stadium. The walk was discontinued when Graham was fired; however, in 2022, interim coach Shaun Aguano announced that the Sun Devil Walk was returning. The most recognizable songs played by the band are the "Alma Mater" and ASU's fight songs, "Maroon and Gold" and the "Al Davis Fight Song". The "Alma Mater" was composed in 1937 by Miles A. Dresskell, former music professor and director of the Sun Devil Marching Band (then known as the Bulldog Marching Band). "Maroon and Gold" was written in 1948 by Felix E. McKernan, another former director of the band. The "Al Davis Fight Song" (also known as "Go, Go Sun Devils" and the "Arizona State University Fight Song") was composed by ASU alumnus Albert Oliver Davis in the 1940s without any lyrics; lyrics were added later. The Curtain of Distraction is a tradition that appears at every men's and women's basketball game. The tradition started in 2013 as a way to draw fans to the games. In the second half of basketball games, a portable "curtain" opens up in front of opponents shooting free throws, and students pop out of the curtain to try to distract them. Skits have included an Elvis impersonator, people rubbing mayonnaise on their chests, and people wearing unicorn heads. In 2016, Olympian Michael Phelps came out of the curtain wearing a Speedo during a game against Oregon State. ESPN estimated that the distraction may give ASU a one-to-three-point advantage. Student life Arizona State University has an active extracurricular involvement program. Located on the second floor of the Student Pavilion at the Tempe campus, Educational Outreach and Student Services (EOSS) provides opportunities for student involvement through clubs, sororities, fraternities, community service, leadership, student government, and co-curricular programming. The oldest student organization on campus is Devils' Advocates, the volunteer campus tour guide organization, founded in 1966 as a way to recruit National Merit Scholars more competitively. More than 1,100 ASU alumni can call themselves Advos. Changemaker Central is a student-run centralized resource hub for student involvement in social entrepreneurship, civic engagement, service-learning, and community service that catalyzes student-driven social change. Changemaker Central locations opened on all campuses in fall 2011, providing flexible, creative workspaces for everyone in the ASU community. The project is entirely student-run and advances ASU's institutional commitments to social embeddedness and entrepreneurship. The space allows students to meet, work and join new networks and collaborative enterprises while taking advantage of ASU's many resources and opportunities for engagement.
Changemaker Central has signature programs, including the Changemaker Challenge, that support students in becoming changemakers by creating communities of support around new ideas and solutions and by increasing access to early-stage seed funding. The Changemaker Challenge seeks undergraduate and graduate students from across the university who are dedicated to making a difference in local and global communities through innovation. Students can win up to $10,000 to realize their innovative project, prototype, venture or community partnership ideas. In addition to Changemaker Central, the Greek community (Greek Life) at Arizona State University has been important in binding students to the university and providing social outlets. ASU is also home to one of the nation's first and fastest-growing gay fraternities, Sigma Phi Beta, founded in 2003, which is considered a sign of the university's growing commitment to supporting diversity and inclusion. The second Eta chapter of Phrateres, a non-exclusive, non-profit social-service club, was installed here in 1958 and became inactive in the 1990s. There are multiple councils for Greek Life, including the Interfraternity Council (IFC), Multicultural Greek Council (MGC), National Association of Latino Fraternal Organizations (NALFO), National Pan-Hellenic Council (NPHC), Panhellenic Association (PHA), and the Professional Fraternity Council (PFC). The State Press is the university's independent, student-operated news publication. The State Press covers news and events on all four ASU campuses. Student editors and managers are solely responsible for the content of the State Press website. The publication is overseen by an independent board and guided by a professional adviser employed by the university. The Downtown Devil is a student-run news publication website for the Downtown Phoenix campus, produced by students at the Walter Cronkite School of Journalism and Mass Communication. ASU has one student-run radio station, Blaze Radio, a completely student-run broadcast station owned and funded by the Cronkite School of Journalism. The station broadcasts using a 24-hour online stream on its official website, playing music around the clock and featuring daily student-hosted news, music, and sports specialty programs. Associated Students of Arizona State University (ASASU) is the student government at Arizona State University. It is composed of the Undergraduate Student Government (USG) and the Graduate & Professional Student Association (GPSA). Each ASU campus has its own USG: USG Tempe (Tempe), USGD (Downtown), USG Polytechnic (Polytechnic) and USG West (West). Members and officers of ASASU are elected annually by the student body. The Residence Hall Association (RHA) of Arizona State University is the student government for every ASU student living on campus. Each ASU campus has an RHA that operates independently. RHA's purpose is to improve the quality of residence hall life and provide a cohesive voice for the residents by addressing the concerns of the on-campus populations to university administrators and other campus organizations; providing cultural, diversity, educational, and social programming; and establishing and working with individual community councils. Athletics Arizona State University's Division I athletic teams are called the Sun Devils, which is also the nickname used to refer to students and alumni of the university. They compete in the Big 12 Conference in 20 varsity sports.
Historically, the university has performed strongly in men's, women's, and mixed archery; men's, women's, and mixed badminton; women's golf; women's swimming and diving; baseball; and football. Arizona State University's NCAA Division I-A program competes in 9 varsity sports for men and 11 for women. ASU's athletic director is Ray Anderson, former executive vice president of football operations for the National Football League. Anderson replaced Steve Patterson, who had been appointed to the position in 2012, replacing Lisa Love, the former senior associate athletic director at the University of Southern California. Love was responsible for the hiring of coaches Herb Sendek, the men's basketball coach, and Dennis Erickson, the men's football coach. Erickson was fired in 2011 and replaced by Todd Graham. In December 2017, ASU announced that Herm Edwards would replace Graham as the head football coach. ASU's chief rival is the University of Arizona. ASU has won 24 national collegiate team championships in the following sports: baseball (5), men's golf (2), women's golf (8), men's gymnastics (1), softball (2), men's indoor track (1), women's indoor track (2), men's outdoor track (1), women's outdoor track (1), and wrestling (1). In September 2009, criticism over the seven-figure salaries earned by various coaches at Arizona's public universities (including ASU) prompted the Arizona Board of Regents to re-evaluate the salary and benefit policy for athletic staff. With the 2011 expansion of the Pac-12 Conference, a new $3 billion contract for revenue sharing among all the schools in the conference was established. With the infusion of funds, the salary issue and various athletic department budgeting issues at ASU were addressed. The Pac-12's new media contract with ESPN allowed ASU to hire a new coach in 2012, and a new salary and bonus package (maximum bonus of $2.05 million) was instituted, one of the most lucrative in the conference. ASU also plans to expand its athletic facilities with a public-private investment strategy to create an amateur sports district that can accommodate the Pan American Games and operate as an Olympic training center. The athletic district will include a $300 million renovation of Sun Devil Stadium with new football facilities; the press box and football offices in the stadium were remodeled in 2012. Arizona State Sun Devils football was founded in 1896 under coach Fred Irish. The team has played in the 2012 Fight Hunger Bowl, the 2011 Las Vegas Bowl, the 2016 Cactus Bowl, and the 2007 Holiday Bowl. The Sun Devils played in the 1997 Rose Bowl and won the Rose Bowl in 1987. The team appeared in the Fiesta Bowl in 1971, 1972, 1973, 1975, 1977, and 1983, winning five of the six. The team claimed national championships for the 1970 and 1975 seasons. The Sun Devils were conference champions in 1986, 1996, and 2007. Altogether, the football team has won 17 conference championships and, as of the 2015–2016 season, had participated in a total of 29 bowl games, with a 14–14–1 record in those games. ASU Sun Devils Hockey competed against NCAA Division I schools for the first time in 2012, largely due to the success of the program; in 2016, it became a full-time Division I team. Eight members of ASU's women's swimming and diving team were selected to the Pac-10 All-Academic Team on April 5, 2010, and five members of ASU's men's swimming and diving team were selected to the Pac-10 All-Academic Team on April 6, 2010.
In April 2015, Bobby Hurley was hired as the men's basketball coach, replacing Herb Sendek. Previously, Hurley had been the head coach of the UB Bulls at the University at Buffalo, as well as an assistant coach at Rhode Island and Wagner. In 2015, Bob Bowman was hired as the head swim coach; he had previously trained Michael Phelps throughout Phelps's Olympic career. As of fall 2015, ASU students, including those enrolled in online courses, may claim a free ticket to all ASU athletic events upon presenting a valid student ID and reserving one online through their ASU and Ticketmaster accounts. Tickets were limited or unavailable in the 2020–2021 and 2021–2022 school years due to the COVID-19 pandemic. Alumni As of 2024, the Arizona State University Alumni Association has more than 640,000 members worldwide, 338,000 of whom live in Arizona. It is headquartered in Old Main on the Tempe campus. Prominent alumni in government and politics include three U.S. senators (Carl Hayden, Roger Jepsen and Kyrsten Sinema) and four governors of Arizona (Evan Mecham, Jane Dee Hull, Doug Ducey and Katie Hobbs), as well as ten U.S. representatives; former U.S. ambassador and Secretary of the Air Force Barbara Barrett; and three presidents of the Navajo Nation (Peterson Zah, Albert Hale and Joe Shirley Jr.). In business, alumni include Ira A. Fulton, founder of Fulton Homes and namesake of ASU's Ira A. Fulton Schools of Engineering; Kate Spade, namesake and cofounder of Kate Spade New York; and Kevin Warren, president of the Chicago Bears and former commissioner of the Big Ten Conference. Academics include Harriet Nembhard, Dean T. Kashiwagi, and Eduardo Obregón Pagán. Sun Devils have also made a mark on pop culture, with figures including Steve Allen, Jimmy Kimmel, sportscaster Al Michaels, and comedian and actor David Spade. Influential writers and novelists include Amanda Brown, author of Legally Blonde; academic and animal scientist Temple Grandin; and conservative author, commentator and popular historian Larry Schweikart, author of A Patriot's History of the United States. Six ASU alumni are enshrined in the Pro Football Hall of Fame: Eric Allen, Curley Culp, Mike Haynes, John Henry Johnson, Randall McDaniel and Charley Taylor. Silver Star recipient Pat Tillman, who played football at ASU from 1994 to 1997, left his National Football League career to enlist in the United States Army in the aftermath of the September 11 attacks. As of 2024, ASU is second among all NCAA universities with 117 alumni who have played in Major League Baseball, and it has the most inductees into the College Baseball Hall of Fame, with notable players including Barry Bonds, Reggie Jackson, Ian Kinsler and Dustin Pedroia. Thirty Sun Devils have played in the National Basketball Association, including Joe Caldwell, Ike Diogu, Lionel Hollins, James Harden, Eddie House, Fat Lever, Alton Lister and Byron Scott. Joey Daccord was the first ASU alumnus to play in the National Hockey League, while ASU has produced professional women's soccer players including Liz Bogus, Alexia Delgado and Jemma Purfield. ASU alumni golfers include major tournament winners Phil Mickelson and Jon Rahm. Wrestlers and mixed martial arts fighters include Zeke Jones, Anthony Robles and Cain Velasquez. More than 200 Sun Devil student-athletes have competed in the Olympic Games as of 2024, winning a total of 66 medals; notable Olympians from ASU include Melissa Belote, Herman Frazier, Ron Freeman, Jan Henne and Léon Marchand.
Faculty ASU faculty have included former CNN host Aaron Brown, academic Claude Olney, meta-analysis developer Gene V. Glass, feminist and author Gloria Feldt, physicist Paul Davies, and Pulitzer Prize winner and The Ants coauthor Bert Hölldobler. David Kilcullen, a counterinsurgency theorist, is a professor of practice. Donald Johanson, who discovered the 3.18-million-year-old fossil hominid Lucy (Australopithecus) in Ethiopia, is also a professor, as is George Poste, chief scientist for the Complex Adaptive Systems Initiative. Former US senator Jeff Flake was appointed as a distinguished dean fellow on December 2, 2020. Nobel laureate faculty include Leland Hartwell and Edward C. Prescott. On June 12, 2012, Elinor Ostrom, ASU's third Nobel laureate, died at the age of 78. Presidential visits Arizona State University has been visited by ten United States presidents. President Theodore Roosevelt was the first president to visit campus, speaking on the steps of Old Main on March 20, 1911, while in Arizona to dedicate the Roosevelt Dam. Richard Nixon never visited ASU as president, but he visited Phoenix as president on October 31, 1970, at an event that included a performance by the Arizona State University Band, which Nixon acknowledged, remarking that "when I am in Arizona, Arizona State is number one." Former president Lyndon B. Johnson spoke at ASU's Grady Gammage Memorial Auditorium on January 29, 1972, at a memorial service for ASU alumnus Senator Carl T. Hayden. Future president Gerald R. Ford debated Senator Albert Gore, Sr. at Grady Gammage Memorial Auditorium on April 28, 1968, and Ford returned to the same building as a former president to give a lecture on February 24, 1984. Former president Jimmy Carter visited Arizona PBS at ASU's Walter Cronkite School of Journalism and Mass Communication on July 31, 2015, to promote a memoir. Future president Ronald Reagan gave a political speech at the school's Memorial Union in 1957, and returned to campus as a former president on March 20, 1989, delivering his first post-presidential speech at ASU's Wells Fargo Arena. Former president George H. W. Bush gave a lecture at Wells Fargo Arena on May 5, 1998. President Bill Clinton became the first sitting president to visit ASU on October 31, 1996, speaking on the Grady Gammage Memorial Auditorium lawn. He returned to ASU in 2006, and in 2014 he, Hillary Clinton, and Chelsea Clinton came to campus to host the Clinton Global Initiative University. President George W. Bush became the second sitting president to visit the school's campus when he debated Senator John Kerry at the university's Grady Gammage Memorial Auditorium on October 13, 2004. President Barack Obama visited ASU as sitting president on May 13, 2009, delivering the commencement speech for the spring 2009 commencement ceremony; he had previously visited the school as a United States senator. Former president Donald Trump spoke at a campaign rally in Mullett Arena on October 24, 2024.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ky%C5%8Dgen] | [TOKENS: 3934] |
Kyōgen Kyōgen (狂言; Japanese pronunciation: [kʲoː.ɡeꜜɴ, -ŋeꜜɴ]) is a form of traditional Japanese comic theater. It developed alongside Noh, was performed along with Noh as an intermission of sorts between Noh acts on the same stage, and retains close links to Noh in the modern day; therefore, it is sometimes designated Noh-kyōgen. Its contents are nevertheless not at all similar to the formal, symbolic, and solemn Noh theater; kyōgen is a comic form, and its primary goal is to make its audience laugh. Kyōgen together with Noh is part of Nōgaku theatre. Kyōgen is sometimes compared to the Italian comic form of commedia dell'arte, which developed in the early 17th century and likewise features stock characters. It also has parallels with the Greek satyr play, a short, comical play performed between tragedies. History One of the oldest ancestors of kyōgen is considered to be the comical mimicry that was among the arts constituting sangaku (散楽), which was introduced to Japan from China in the Nara period, in the 8th century. In the Heian period (794–1185), sangaku developed into sarugaku by merging with traditional Japanese performing arts such as dengaku, and in the Kamakura period (1185–1333), sarugaku divided into Noh, a drama of serious singing and dancing, and kyōgen, a comical play of speech and action. When Kan'ami and Zeami completed Noh in the style known today in the early Muromachi period (1333–1573), in the 14th century, kyōgen was a simple and comical short play different from the style known today, and performers of kyōgen were under the control of Noh troupes. In the late Muromachi period, kyōgen developed as a form of theater in its own right, and the Ōkura school was established by kyōgen performers. In the Edo period (1603–1868), the Sagi and Izumi schools were established. Because the Tokugawa shogunate designated kyōgen and Noh as ceremonial arts in the Edo period, kyōgen performers of these three schools were employed by the shogunate, by individual daimyō (feudal lords), and by the Imperial Court, and kyōgen developed greatly. Kyōgen provided a major influence on the later development of kabuki theater. After the earlier, more ribald forms of kabuki had been outlawed in the mid-17th century, the government permitted the establishment of the new yarō-kabuki (men's kabuki) only on the grounds that it refrain from the previous kabuki forms' lewdness and instead model itself after kyōgen. Noh had been the official entertainment form of the Edo period, and was therefore subsidized by the government. Kyōgen, performed in conjunction with Noh, also received the patronage of the government and the upper class during this time. Following the Meiji Restoration, however, this support ceased. Without government support, Noh and kyōgen went into decline, as many Japanese citizens gravitated toward the more "modern" Western art forms. In 1879, however, former US president Ulysses S. Grant and his wife, while touring Japan, expressed an interest in the traditional art of Noh. They became the first Americans to witness Noh and kyōgen plays and are said to have enjoyed the performance. Their approval is believed to have sparked a revival of interest in these forms. In modern Japan, kyōgen is performed both separately and as a part of Noh.
When performed as part of a Noh program, kyōgen can take three forms: honkyōgen (本狂言; actual kyōgen), a separate comic kyōgen play performed between two Noh plays (inter-Noh); aikyōgen (間狂言; in-between kyōgen, kyōgen interval), a non-comic scene within a Noh play (intra-Noh, between two scenes); or betsukyōgen (別狂言; special kyōgen). In aikyōgen, most often the main Noh actor (shite) leaves the stage and is replaced by a kyōgen actor (狂言方, kyōgen-kata), who then explains the play for the benefit of the audience, though other forms are also possible – the aikyōgen happening at the start, or the kyōgen actor otherwise interacting with the Noh actors. As part of Noh, aikyōgen is not comic – the manner (movements, way of speech) and costume are serious and dramatic. However, the actor is dressed in a kyōgen outfit and uses kyōgen-style language and delivery (rather than Noh language and delivery) – meaning simpler, less archaic language, delivered closer to a speaking voice – and thus can generally be understood by the audience, hence the role in explaining the play. Thus, while the costume and delivery are kyōgen-style (kyōgen in form), the clothing will be more elegant and the delivery less playful than in separate, comic kyōgen. Before and after aikyōgen, the kyōgen actor waits (kneeling in seiza) at the kyōgen seat (狂言座, kyōgen-za) at the end of the bridge (hashigakari), close to the stage. The traditions of kyōgen are maintained primarily by family groups, especially the Izumi school and Ōkura school. For a comprehensive list of plays, see List of Kyōgen plays. Elements Kyōgen plays are invariably brief – often about 10 minutes, as traditionally performed between acts of Noh – and often contain only two or three roles, typically stock characters. Notable ones include Tarō kaja (太郎冠者; main servant, literally "firstborn son + servant"), Jirō kaja (次郎冠者; second servant, literally "second son + servant"), and the master (主人, shujin). Movements and dialogue in kyōgen are typically very exaggerated, making the action of the play easy to understand. Elements of slapstick or satire are present in most kyōgen plays. Some plays are parodies of actual Buddhist or Shinto religious rituals; others are shorter, more lively, simplified versions of Noh plays, many of which are derived from folktales. As with Noh, jo-ha-kyū is a fundamental principle, which is particularly relevant for movement. As with Noh and kabuki, all kyōgen actors, including those in female roles, have traditionally been men. Female roles are indicated by a particular piece of attire, a binankazura (美男鬘) – a long white sash, wrapped around the head, with the ends hanging down the front of the body and tucked into the belt, like symbolic braids; at the two points (either side of the head) where the sash changes from being wrapped around to hanging down, the sash sticks up, like two small horns. Similarly, actors play roles regardless of age – an old man may play the role of Tarō kaja opposite a young man playing the master, for instance. Outfits are generally kamishimo (an Edo-period outfit consisting of a kataginu top and hakama pants), with the master (if present) generally wearing nagabakama (long, trailing pants). Actors in kyōgen, unlike those in Noh, typically do not wear masks, unless the role is that of an animal (such as a tanuki or kitsune) or of a god. Consequently, kyōgen masks are less numerous in variety than Noh masks.
Both masks and costumes are simpler than those characteristic of Noh. Few props are used, and there are minimal or no stage sets. As with Noh, a fan is a common accessory. The language in kyōgen depends on the period, but much of the classic repertoire is in Early Modern Japanese, reasonably analogous to Early Modern English (as in Shakespeare). The language is largely understandable to contemporary Japanese speakers, but sounds archaic, with pervasive use of the gozaru (ござる) form rather than the masu (ます) form that is now used (see copula: Japanese). For example, when acknowledging a command, Tarō kaja often replies with kashikomatte-gozaru (畏まってござる; "Yes sir!"), for which in modern Japanese one uses kashikomarimashita (畏まりました). Further, some of the words and nuances cannot be understood by a modern audience without notes, as in Shakespeare. This contrasts with Noh, where the language is more difficult and generally not understandable to a contemporary audience. There are numerous set patterns – stock phrases and associated gestures, such as kashikomatte-gozaru (with a bow) and Kore wa mazu nanto itasō. Iya! Itashiyō ga gozaru. ("So first, what to do. Aha! There is a way to do it."), performed while bowing and cocking the head (indicating thought), followed by standing up with a start on Iya! Plays often begin with set phrases such as Kore wa kono atari ni sumai-itasu mono de gozaru. ("This is the person who resides in this place.") and (if featuring Tarō kaja) often end with Tarō kaja running off the stage yelling Yaru-mai zo, yaru-mai zo! ("I won't do it, I won't do it!"). Lines are delivered in a characteristic rhythmic, sing-song voice, and generally quite loudly. Pace, pitch, and volume are all varied for emphasis and effect. As in Noh, which is performed on the same stage, and indeed in many martial arts (such as kendo and aikido), actors move via suriashi (摺り足), sliding their feet and avoiding steps on the easily vibrated Noh stage. When walking, the body seeks to remain at the same level, without bobbing up or down. Plays also frequently feature stamping feet or otherwise hitting the ground (such as jumping) to take advantage of the stage. As with Noh, the angle of gaze is important, and usually a flat gaze is used, avoiding looking down or up, which would create a sad or fierce atmosphere. Characters usually face each other when speaking, but turn towards the audience when delivering a lengthy speech. Arms and legs are kept slightly bent. Unless involved in action, hands are kept on the upper thighs, with fingers together and thumb tucked in – they move down to the sides of the knees when bowing. Kyōgen is performed to the accompaniment of music, especially the flute, drums, and gong. However, the emphasis of kyōgen is on dialogue and action, rather than on music or dance. Kyōgen is generally performed on a Noh stage, as the stage is an important part of the performance (the space, the reaction to stamps, the ease of sliding, etc.). It can, however, be performed in any space (particularly by amateur or younger performers), though if possible a Noh-like floor will be installed. Komai In addition to the kyōgen plays themselves, performances include short dances called komai (小舞; small dance). These are traditional dramatic dances (not comic), performed to a chanted accompaniment, and with varied themes. The movements are broadly similar to Noh dances. The often archaic language used in the lyrics and the chanted delivery mean that these chants are often not understandable to a contemporary audience.
Kyōgen today Today, kyōgen is performed and practiced regularly, both in major cities (especially Tokyo and Osaka) and throughout the country, and is featured on cultural television programs. In addition to performances during Noh plays, it is also performed independently, generally in programs of three to five plays. New kyōgen are written regularly, though few new plays enter the repertoire. Particularly significant is Susugigawa (濯ぎ川; The Washing River), written and directed by Tetsuji Takechi in 1953, during his post-kabuki theater work. Based on a medieval French farce, this play became the first new kyōgen to enter the traditional repertoire in a century. In rare cases, bilingual kyōgen or fusions of kyōgen with Western forms have been staged. An early example is the group Mei-no-kai, consisting of kyōgen, Noh, and shingeki actors, who staged Beckett's Waiting for Godot in 1973; the kyōgen acting was best received. A notable example is the Noho Theatre group, based in Kyoto, under the direction of the American Jonah Salz and with primary acting by Akira Shigeyama. This group has performed a bilingual Japanese/English translation of Susugigawa, titled The Henpecked Husband, together with works by Samuel Beckett, notably the mime Act Without Words I, performed by a kyōgen actor in Japanese theatrical style (first performed 1981). The latter features kyōgen movements and Japanese cultural adaptations – for example, the nameless character contemplates suicide not by holding scissors to his throat (as per the stage directions), but to his stomach, as if contemplating hara-kiri. Unusually for a Beckett adaptation (such adaptations were usually strictly controlled by Beckett and his estate), this was presented to Beckett and met with his approval. The distinctive diction of kyōgen is also occasionally used in other media, with kyōgen actors working as voice actors. An example is the animated movie A Country Doctor (カフカ 田舎医者, Kafuka: Inaka Isha) by Kōji Yamamura, based on "A Country Doctor" by Franz Kafka, in which the voices are performed by the Shigeyama family. As with Noh, many Japanese are familiar with kyōgen only through learning about it in school or through television performances. A play frequently featured in textbooks is Busu (附子; The Delicious Poison), in which the servants Tarō-kaja and Jirō-kaja are entrusted with some sugar by their master but told not to eat it, as it is poison; naturally, they eat it. As with Noh, many professional performers are born into a family of performers, often starting to perform at a young age, but others are not born into such families and begin practicing in high school or college. Unlike performers of Noh drama or nihonbuyō dance, who earn their living primarily via teaching and the support of underlings in the iemoto system, but similar to rakugo comedians, professional kyōgen players earn their living from performing (possibly supplemented by side jobs) and maintain an active touring schedule. Due to the limited repertoire (a classical canon, many plays of which are no longer performed because they are dated, and which few new plays enter) and frequent performances, a professional kyōgen actor can be expected to be familiar with all roles in all plays in their school's repertoire, and to perform them with some regularity.
While there are a number of kyōgen families, there are at present two leading families: the Nomura (野村) family of Tokyo (traditionally the Edo region) and the Shigeyama (茂山) family of Kyoto (traditionally the Kamigata region), of the Ōkura school. Both are often featured performing on TV, appear in the news, and tour overseas, and both have been involved in popularizing kyōgen and in some efforts to modernize it. See also the List of Living National Treasures of Japan (performing arts), whose kyōgen members include individuals from these families, among others. In 1989, Junko Izumi became the first female professional kyōgen performer. In the post-war period, foreigners have participated in kyōgen as amateur performers. A notable early example was the 1956 performance by the scholar and translator Donald Keene in the play Chidori (千鳥; Plover), with Tetsuji Takechi in the role of the sake shop owner, before an audience including such prominent authors as Jun'ichirō Tanizaki, Yasunari Kawabata and Yukio Mishima. This is featured in his series of essays, Chronicles of My Life in the 20th Century, and inspired the title of his anthology The Blue-Eyed Tarokaja: A Donald Keene Anthology. Today, foreigners (resident in Japan, with sufficient Japanese skills) are able to practice with amateur troupes. In addition, since 1985, an intensive summer program (originally 6 weeks, now 3 weeks) in kyōgen for beginners has been run at the Kyoto Art Center, taught by Akira Shigeyama (of the Shigeyama family) and others, and organized by the scholar of Japanese theater Jonah Salz. Plays There are a few hundred plays in the repertoire (about 180 in the Ōkura school), but many are now rarely performed, as the audience would not understand the jokes or would deem them offensive (e.g., for making fun of a blind money-lender). Plays commonly studied and performed by beginners, due to their brevity and simplicity, include Shibiri (痿痢; "Cramps", "Pins and Needles"), 舟船, 土筆, 以呂波, and Kuchimane (口真似; The Mimic). Kuchimane in particular is frequently performed. Another well-known play, featured in textbooks, is Busu (附子; "Wolfsbane", "The Delicious Poison"), mentioned above. Another play is 柿山伏 (Kakiyamabushi, or "Persimmon Mountain Hermit"), about an ascetic priest who grows hungry in the mountains; he discovers a persimmon tree belonging to a farmer and eats from it. The farmer catches him in the act and makes a fool of the priest – getting the priest to pretend to be a crow, a monkey, and a large bird, causing him to fall from the tree. The priest later gets his revenge by chanting and summoning supernatural forces, but in the end, the farmer refuses to nurse the priest back to health.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Greece] | [TOKENS: 6070] |
History of the Jews in Greece The history of the Jews in Greece can be traced back to at least the fourth century BCE. The oldest and most characteristic Jewish group to have inhabited Greece are the Romaniotes, also known as "Greek Jews," though the term "Greek Jew" is predominantly used for any Jew who lives in or originates from the modern region of Greece. Aside from the Romaniotes, a distinct Jewish population that historically lived in communities throughout Greece and neighboring areas with large Greek populations, Greece had a large population of Sephardi Jews and is a historical center of Sephardic life; the city of Salonica or Thessaloniki, in Greek Macedonia, was called the "Mother of Israel." Greek Jews played an important role in the early development of Christianity and became a source of education and commerce for the Byzantine Empire and throughout the period of Ottoman Greece, until suffering devastation in the Holocaust after Greece was conquered and occupied by the Axis powers. Despite efforts by Greeks to protect them, some 4,000 Jews were deported from the Bulgarian occupation zone to the Treblinka extermination camp. In the aftermath of the Holocaust, a large percentage of the surviving community emigrated to Israel or the United States. As of 2019, the Jewish community in Greece amounts to about 6,000 people out of a population of 10.8 million, concentrated mainly in Athens, Thessaloniki (Salonika in Judeo-Spanish), Larissa, Volos, Chalkis, Ioannina, Trikala and Corfu, with a functioning synagogue on Crete, while very few remain in Kavala and Rhodes. Greek Jews today largely "live side by side in harmony" with Christian Greeks, according to Giorgo Romaio, president of the Greek Committee for the Jewish Museum of Greece, while continuing to work with other Greeks, and with Jews worldwide, to combat any rise of antisemitism in Greece. The Jewish community of Greece is currently making great efforts to establish a Holocaust museum in the country, and a permanent pavilion on the Holocaust of the Greek Jews is to be installed at the Auschwitz concentration camp. A delegation and the president of the Jewish communities of Greece met with Greek politicians in November 2016 and asked for their support in the demand to recover the archives of the Jewish community of Thessaloniki from Moscow. Moses Elisaf, a 65-year-old doctor who ran as an independent candidate, is believed to be the first Jew elected mayor in Greece; he was elected in June 2019. Jewish cultures in Greece Most Jews in Greece are Sephardic, but Greece is also the home of the unique Romaniote culture. Besides the Sephardim and the Romaniotes, northern Italian, Sicilian, Apulian, Provençal, Mizrahi and small Ashkenazi communities have existed as well, in Thessaloniki and elsewhere. Not only did all these communities have their own customs (minhag); they also had their own siddurim printed for their congregations in Greece. The large variety of Jewish customs in Greece was unique. Romaniote Jews have lived in the territory of today's Greece for more than 2,000 years. Their historic language was Yevanic, a dialect of Greek, but Yevanic has no recorded surviving speakers; today's Greek Romaniotes speak Greek. Large communities were located in Ioannina, Thebes, Chalcis, Corfu, Arta, Corinth and on the islands of Lesbos, Chios, Samos, Rhodes, and Cyprus, among others. The Romaniotes are historically distinct from the Sephardim, some of whom settled in Greece after the 1492 expulsion of the Jews from Spain.
All but a small number of the Romaniotes of Ioannina, the largest remaining Romaniote community not assimilated into Sephardic culture, were killed in the Holocaust. Ioannina today has 35 living Romaniotes. The majority of the Jews in Greece are Sephardim, whose ancestors had left Spain, Portugal and Italy. They largely settled in cities such as Thessaloniki, the city which was to be named the "Mother of Israel" in the years to come. The traditional language of Greek Sephardim was Judeo-Spanish, and, until the Holocaust, the community "was a unique blend of Ottoman, Balkan and Hispanic influences", well known for its level of education. The Foundation for the Advancement of Sephardic Studies and Culture calls Thessaloniki's Sephardic community "indisputably one of the most important ones in the world." History of Judaism in Greece The first recorded mention of Judaism in Greece dates from 300 to 250 BCE, on the island of Rhodes. In the 2nd century BCE, Hyrcanus, a leader in the Jewish community of Athens, was honoured by the raising of a statue in the agora. According to Edmund Veckenstedt, Ganymede was a Semite, as his brothers Ilus and Assarakos no doubt were. According to Josephus (Contra Apionem, I, 176–183), an even earlier mention of a Hellenized Jew by a Greek writer was to be found in the work "De Somno" (not extant) by the Greek historian Clearchus of Soli. Here Clearchus describes the meeting between Aristotle (who lived in the 4th century BCE) and a Jew in Asia Minor who was fluent in the Greek language and Greek thought: "'Well', said Aristotle, [...] 'the man was a Jew of Coele Syria (modern Lebanon). These Jews were derived from the Indian philosophers, and were called by the Indians Kalani. Now this man, who entertained a large circle of friends and was on his way from the interior to the coast, not only spoke Greek but had the soul of a Greek. During my stay in Asia, he visited the same places as I did, and came to converse with me and some other scholars, to test our learning. But as one who had been intimate with many cultivated persons, it was rather he who imparted to us something of his own.'" Archaeologists have discovered ancient synagogues in Greece, including the Synagogue in the Agora of Athens and the Delos Synagogue, dating to the 2nd century BCE. Greek Jews played an important role in Greek history, from the early history of Christianity, through the Byzantine Empire and Ottoman Greece, until the tragic near-destruction of the community after Greece fell to Nazi Germany in World War II. The Macedonian empire under Alexander the Great conquered the former Kingdom of Judah in 332 BCE, defeating the Persian Empire, which had held the territory since Cyrus's conquest of the Babylonians. After Alexander's death, the Wars of the Diadochi led to the territory changing rulership rapidly as Alexander's successors fought for control of the Persian territories. The region eventually came to be controlled by the Ptolemaic dynasty, and the area became increasingly Hellenistic. The Jews of Alexandria created a "unique fusion of Greek and Jewish culture", while the Jews of Jerusalem were divided between conservative and pro-Hellene factions.
Along with the influence of this Hellenistic fusion on the Jews who had found themselves part of a Greek empire, Karen Armstrong argues that the turbulence of the period between the death of Alexander and the 2nd century BCE led to a resurgence of Jewish messianism, which would inspire revolutionary sentiment when Jerusalem became part of the Roman Empire. Macedonia and the rest of Hellenistic Greece fell to Rome in 146 BCE. The Jews living in Roman Greece had a different experience from those of the province of Judaea. The New Testament describes Greek Jews as a separate community from the Jews of Judaea, and the Jews of Greece did not participate in the First Jewish-Roman War or later conflicts. The Jews of Thessaloniki, speaking a dialect of Greek and living a Hellenized existence, were joined by a new Jewish colony in the 1st century CE. The Jews of Thessaloniki "enjoyed wide autonomy" in Roman times. Originally a persecutor of the early Jewish Christians until his conversion on the road to Damascus, Paul of Tarsus, himself a Hellenized Jew from Tarsus, a city that had been part of the Greek Seleucid Empire established after Alexander the Great, was instrumental in the founding of many Christian churches throughout the Roman world, including in Asia Minor and Greece. Paul's second missionary journey included proselytizing at Thessaloniki's synagogue until he was driven out of the city by its Jewish community. After the collapse of the Western Roman Empire, elements of Roman civilisation continued in the Byzantine Empire. The Jews of Greece began to come under increasing attention from Byzantium's leadership in Constantinople. Some Byzantine emperors were anxious to exploit the wealth of the Jews of Greece and imposed special taxes on them, while others attempted forced conversions to Christianity. The latter pressure met with little success, as it was resisted by both the Jewish community and the Greek Christian synods. The Sefer Yosippon was written down in the 10th century in Byzantine southern Italy by the Greek-speaking Jewish community there. Judah Leon ben Moses Mosconi, a Romaniote Jew from Achrida, later edited and expanded the Sefer Yosippon. Tobiah ben Eliezer (טוביה בן אליעזר), a Talmudist and poet of the 11th century, lived and worked in the city of Kastoria. He is the author of the Lekach Tov, a midrashic commentary on the Pentateuch and the Five Megillot, and also of some poems. The Spanish Jewish explorer Benjamin of Tudela visited Greece during his travels around 1161/1162 CE. After leaving southern Italy and sailing through the Adriatic Sea, he visited Corfu, Thebes, Almyros, and Thessaloniki, before moving on to Constantinople. In Thebes, he reported a Jewish population of 2,000, the largest Jewish community in any Byzantine city of the 12th century after Constantinople, the empire's capital. The first settlement of Ashkenazi Jews in Greece occurred in 1376, heralding an Ashkenazi immigration from Hungary and Germany by Jews seeking to avoid persecution throughout the 15th century. Jewish immigrants from France and Venice also arrived in Greece and created new Jewish communities in Thessaloniki. The Fourth Crusade degraded the position of the Jews in the new Frankish lands on Greek soil, formerly parts of the Byzantine Empire. The Jews at that time were economically powerful though small in number; they formed a community of their own, separate from the Christians, and dealt in moneylending.
Greece was ruled by the Ottoman Empire from the mid-15th century (although pockets of Christian rule, such as the Duchy of Naxos, persisted for longer) until the conclusion of the Greek War of Independence in 1832 and, for the northern territories, the First Balkan War in 1913. During this period, the centre of Jewish life in the Balkans was Salonica, or Thessaloniki. The Sephardim of Thessaloniki were the exclusive tailors for the Ottoman Janissaries and enjoyed economic prosperity through commercial trading in the Balkans. After the Alhambra Decree of 1492 expelled the Jewish community from Spain, between fifteen and twenty thousand Sephardic Jews settled in Thessaloniki (then Salonica). According to the Jewish Virtual Library: "Greece became a haven of religious tolerance for Jews fleeing the Spanish Inquisition and other persecution in Europe. The Ottomans welcomed the Jews because they improved the economy. Jews occupied administrative posts and played an important role in intellectual and commercial life throughout the empire." These immigrants established the city's first printing press, and the city became known as a centre for commerce and learning. The exile of other Jewish communities swelled the city's Jewish population; in 1519, Jews represented 56% of the population of Thessaloniki, and in 1613, their share was 68%. Ottoman Jews were obliged to pay special "Jewish taxes" to the Ottoman authorities. These taxes included the Cizye, the İspençe, the Haraç, and the Rav akçesi ("rabbi tax"). Sometimes, local rulers would also levy taxes for themselves, in addition to the taxes sent to the central authorities in Constantinople. In 1523, the first printed edition of the Mahzor Romania, containing the minhag of the Jews of the Byzantine Empire, was published in Venice by Constantinopolitan Jews; this minhag probably represents the oldest European prayer rite. A polyglot edition of the Bible published in Constantinople in 1547 has the Hebrew text in the middle of the page, with a Judaeo-Spanish translation on one side and a Yevanic translation on the other. Joseph Nasi, a Portuguese Marrano Jew, was appointed by the Sultan as Duke of the Archipelago, encompassing the Cyclades islands in Greece, for the years 1566–1579. The Jewish community in Patras, which had existed since antiquity, left the city during the Ottoman–Venetian wars of the 17th century. However, following the Ottoman conquest of the city in 1715, Jews returned and lived there in relative peace. The middle of the 19th century, however, brought change to Greek Jewish life. The Janissaries had been destroyed in 1826, and traditional commercial routes were being encroached upon by the Great Powers of Europe. The Sephardic population of Thessaloniki had risen to between twenty-five and thirty thousand members, leading to scarcity of resources, fires and hygiene problems. The end of the century saw great improvements, as the mercantile leadership of the Sephardic community, particularly the Allatini family, took advantage of new trade opportunities with the rest of Europe. According to the historian Misha Glenny, Thessaloniki was the only city in the Empire where some Jews "employed violence against the Christian population as a means of consolidating their political and economic power", as traders from the Jewish population closed their doors to traders from the Greek and Slav populations and physically intimidated their rivals.
With the importation of modern antisemitism by immigrants from the West later in the century, moreover, some of Thessaloniki's Jews became the target of Greek and Armenian pogroms, and antisemitic incidents elsewhere in Greece, such as the Rhodes blood libel of 1840, reflected tensions between the empire's Greek and Jewish communities. Thessaloniki's Jewish community comprised more than half of the city's population until the early 1900s. As a result of the Jewish influence on the city, many non-Jewish inhabitants of Thessaloniki spoke Judeo-Spanish, the language of the Sephardic Jews, and the city virtually shut down on Saturday, the Jewish Sabbath, earning it at times the name "Little Jerusalem." Generally loyal to the Ottoman Empire, the Jews of southern Greece did not take a positive stance towards the Greek War of Independence, and so they often became targets of the revolutionaries as well. Ottoman rule in Thessaloniki ended much later, in 1912, as Greek soldiers entered the city in the last days of the First Balkan War. Thessaloniki's status had not been decided by the Balkan Alliance before the war, and Glenny writes that some amongst the city's majority Jewish population at first hoped that the city might be controlled by Bulgaria. Bulgarian control would have kept the city at the forefront of a national trade network, while Greek control threatened, for certain social classes across ethnic groups, Thessaloniki's position as the trading destination for Balkan villages. After the city was incorporated into Greece in 1913, Thessaloniki's Jews were accused of cooperating with the Turks and of being traitors, and were subjected to pressure from the Greek army and local Greeks. As a result of the intense coverage of these pressures in the world press, the Venizelos government took a series of measures against antisemitic actions. After liberation, the Greek government won the support of the city's Jewish community, and Greece under Eleftherios Venizelos was one of the first countries to accept the Balfour Declaration. In 1934, a large number of Jews from Thessaloniki made aliyah to Mandatory Palestine, settling in Tel Aviv and Haifa. Those who could not get past British immigration restrictions simply came on tourist visas and disappeared into Tel Aviv's Greek community. Among them were some 500 dockworkers and their families, who settled in Haifa to work at its newly constructed port. Later, with the establishment in 1936 of the Metaxas regime, which despite its fascist character was not typically hostile to Jews, the stance of the Greek state towards the Jewish community improved further. During World War II, Greece was conquered by Nazi Germany and occupied by the Axis powers. 12,898 Greek Jews fought in the Greek army, one of the best-known amongst them being Colonel Mordechai Frizis, in a force which first successfully repelled the Italian Army but was later overwhelmed by German forces. The Germans had been gathering intelligence on Salonica's Jewish community since 1937. Some 60,000–70,000 Greek Jews, or at least 81% of the country's Jewish population, were murdered, especially in the zones occupied by Nazi Germany and Bulgaria. Although the Germans deported a great number of Greek Jews, some were successfully hidden by their Greek neighbours. The losses were most significant in places like Thessaloniki, Ioannina, Corfu and Rhodes, where most of the Jewish population was deported and killed.
In contrast, a larger percentage of Jews survived in places where the local population was helpful and hid the persecuted Jews, such as Athens, Larissa or Volos. Perhaps the most important rescue efforts took place in Athens, where some 1,200 Jews were given false identity cards thanks to the efforts of Archbishop Damaskinos and police chief Angelos Evert. On July 11, 1942, the Jews of Thessaloniki were rounded up in preparation for slave labour, and the community paid a fee of 2 billion drachmas for their freedom. Nevertheless, some 50,000 were sent to Auschwitz, and most of the community's 60 synagogues and schools were destroyed, along with the old Jewish cemetery in the center of the city; only 1,950 survived. Many survivors later emigrated to Israel and the United States. Today the Jewish population of Thessaloniki numbers roughly 1,000 and maintains two synagogues. In Corfu, after the fall of Italian fascism in 1943, the Nazis took control of the island. Corfu's mayor at the time, Kollas, was a known collaborator, and various antisemitic laws were passed by the Nazis, who now formed the occupation government of the island. In early June 1944, while the Allies bombed Corfu as a diversion from the landing in Normandy, the Gestapo rounded up the Jews of the city, temporarily incarcerated them at the old fort (Palaio Frourio), and on June 10 sent them to Auschwitz, where very few survived. However, approximately two hundred out of a total population of 1,900 managed to flee, and many among the local populace at the time provided shelter and refuge to those who escaped the Nazis. In addition, a prominent section of the old town is to this day called Evraiki (Εβραική), meaning "Jewish quarter", in recognition of the Jewish contribution and continued presence in Corfu city. An active synagogue (Συναγωγή) remains an integral part of Evraiki today, with about 65 members. On March 4, 1943, Bulgarian soldiers, with help from German soldiers, took the Jews from Komotini and Kavala off the Karageorge passenger boat, massacred them and sank the boat. The Bulgarians confiscated all of the Jewish properties and possessions. At Thessaloniki, individual police officers rescued their Jewish friends and occasionally even their families, while in Athens the chief of police, Angelos Evert, and his men actively supported and rescued Jews. The 275 Jews of the island of Zakynthos, however, survived the Holocaust. When the island's mayor, Loukas Karrer, was presented with the German order to hand over a list of Jews, Metropolitan Bishop Chrysostomos of Zakynthos returned to the Germans with a list of two names: his own and the mayor's. The island's population hid every member of the Jewish community. In 1947, a large number of the Jews of Zakynthos made aliyah to Palestine (later Israel), while others moved to Athens. When the island was almost levelled by the great earthquake of 1953, the first relief came from Israel, with a message that read "The Jews of Zakynthos have never forgotten their Mayor or their beloved Bishop and what they did for us." The city of Volos, which was in the Italian zone of occupation, had a Jewish population of 882, and many Thessaloniki Jews fleeing the Nazis sought sanctuary there; by March 1944, more than 1,000 Jews lived there. In September 1943, when the Nazis took over, head rabbi Moses Pesach worked with Archbishop Ioakeim and the EAM resistance movement to find sanctuary for the Jews in Mount Pelion. Due to their efforts, 74% of the city's Jews were saved.
Of the more than 1,000 Jews, only 130 were deported to Auschwitz. The Jewish community remained in Volos after the war, but a series of earthquakes in 1955–57 forced many of the remaining Jews to leave, with most emigrating to Israel or the United States. Only 50 to 60 Jews remain in Volos today. Many Jews from Salonika were put on death-camp work details, the Sonderkommandos. On 7 October 1944, during the uprising in Auschwitz, they and other Greek Jews attacked German forces, storming the crematoria and killing about twenty guards. A bomb was thrown into the furnace of Crematorium III, destroying the building. Before being massacred by the Germans, the insurgents sang a song of the Greek partisan movement and the Greek national anthem. In his book If This Is a Man, one of the most famous works of Holocaust literature, Primo Levi describes the group thus: "those Greeks, motionless and silent as the Sphinx, crouched on the ground behind their thick pot of soup." Those members of the community still alive during 1944 made a strong impression on the author. He noted: "Despite their low numbers their contribution to the overall appearance of the camp and the international jargon spoken is of prime importance." He described a strong patriotic sense among them, writing that their ability to survive in the camps was partly explained by the fact that "they are among the most cohesive of the national groups, and from this point of view the most advanced." Recognised for his contributions to the Greek cause early in the war, Mordechai Frizis became one of the most honoured Greek officers of World War II in the postwar years, with a monument outside the national military academy in Athens. Of the 55,000 Thessaloniki Jews deported to extermination camps in 1943, fewer than 5,000 survived. Many of those who returned found their former homes occupied by Greek families, and the Greek government did little to assist the surviving Jewish community with property restoration. Post-war community Following the war, many Greek Jews emigrated to Israel. In August 1949, the Greek government announced that Jews of military age would be allowed to leave for Israel on condition that they renounce their Greek nationality, promise never to return, and take their families with them. The Greek Jews who moved to Israel established several villages, including Tsur Moshe, and many settled in the Florentin neighborhood of Tel Aviv and the area around Jaffa Harbor. Some also emigrated to the United States, Canada, and Australia. Greece was the first country in Europe after the war to return to its Jewish communities the possessions of Jews who had been killed by the Nazis, whether in the Holocaust or as resistance fighters, giving the communities a basis for consolidation. A Jewish minority continues to live in Greece, with communities in Athens and Thessaloniki. The community has shrunk somewhat since the Greek government-debt crisis. As of 2020, about 5,000 Jews live in Greece, mostly in Athens (2,500), with fewer than 1,000 in Thessaloniki. The Greek Jewish community has traditionally been pro-European. Today the Jews of Greece are integrated and work in all fields of the Greek state and society, such as the economy, science and politics. The community of Thessaloniki has demanded that Germany repay the ransom that the Jews of Greece paid to rescue their family members; the Nazis demanded this money but did not free the family members.
The European Court of Justice dismissed this petition. In World War II the Deutsche Reichsbahn had helped the Nazis deport the Jews from Greece. In 2014, representatives of the Jewish community of Thessaloniki demanded that Deutsche Bahn, the successor of the Deutsche Reichsbahn, reimburse the heirs of Holocaust victims of Thessaloniki for the train fares they had been forced to pay for their deportation from Thessaloniki to Auschwitz and Treblinka between March and August 1943. In recognition of the significant Jewish past and present of Thessaloniki, the Aristotle University, together with the city's Jewish community, planned in 2014 to reopen a faculty of Jewish studies; an earlier Jewish faculty had been abolished roughly 80 years before by the Greek dictator Ioannis Metaxas. The new faculty took up its work in October 2015 under professor Georgios Antoniou in the Faculty of Philosophy. Also in 2014, a monument commemorating the old Jewish cemetery was unveiled on the university campus, which was built partially on the grounds of that cemetery. Today, the current Chief Rabbi of Greece is Rabbi Gabriel Negrin. Misha Glenny wrote that Greek Jews had never "encountered anything remotely as sinister as north European anti-Semitism. The twentieth century had witnessed small amounts of anti-Jewish sentiment among Greeks... but it attracted an insignificant minority." The danger of deportation to death camps was repeatedly met with disbelief by Greece's Jewish population. A neo-fascist group, Golden Dawn, exists in Greece and won 18 seats in the Greek Parliament in the September 2015 election. Its leadership had reportedly disbanded it officially in 2005, after conflicts with police and anti-fascists, to no lasting effect. The European Union Monitoring Centre on Racism and Xenophobia 2002–2003 report on anti-Semitism in Greece mentioned several incidents over the two-year period, noting that there were no instances of physical or verbal assaults on Jews, along with examples of "good practices" for countering prejudice. The report concluded that "...in 2003 the Chairman of the Central Jewish Board in Greece stated that he did not consider the rise in antisemitism to be alarming." On 21 November 2003, Nikos Bistis, the Greek Deputy Minister of the Interior, declared January 27 to be Holocaust Remembrance Day in Greece, and committed to a "coalition of Greek Jews, Greek non-Jews, and Jews worldwide to fight antisemitism in Greece." The Greek government-debt crisis, which started in 2009, has seen an increase in extremism of all kinds, including some cases of antisemitic vandalism. In 2010, the front of the Jewish Museum of Greece was defaced for the first time ever. On Rhodes, on 26 October 2012, vandals spray-painted the city's Holocaust monument with swastikas. Partly to head off any new-found threat from extremism, thousands of Jewish and non-Jewish Greeks attended Thessaloniki's Holocaust Commemoration in March 2013, which was personally addressed by Greece's prime minister, Antonis Samaras, who delivered a speech at the Monastir Synagogue in Thessaloniki. Since then, Alexandros Modiano, a Greek-Jewish politician, has been elected to public office and serves on the City Council of Athens. Today the relations between the Jewish community and the state of Greece are good.
Obtaining Greek citizenship for Jews outside Greece The Greek Parliament has decided to restore Greek citizenship to all Holocaust survivors who lost it when they left the country. Those born outside Greece to one or both Greek parents, or to one or more Greek grandparents, are entitled to claim Greek citizenship through their ancestor(s) born in Greece. The process of obtaining Greek citizenship does not require proof of the ancestors' religious denomination.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Internal_geometry] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. In recognition of this influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first known black hole was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, one of the brightest classes of objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object.
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in a galaxy's central bulge to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, since he had died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting, for an infinite time and at an infinite distance from the black hole, to confirm that nothing has escaped; the definition therefore cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^2}{4\pi\epsilon_0} + \frac{c^2 J^2}{GM^2} \leq GM^2$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate the inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because such singularities would make the universe inherently unpredictable, many physicists believe they cannot exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, holds that no such singularities can form through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge as a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly. The stellar black hole GRS 1915+105 has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^2}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^2} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass.
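These bounds are straightforward to evaluate numerically. The following Python sketch is illustrative only: it uses rounded constants, converts the charge bound to SI units via the Coulomb constant (the inequality above is written in Gaussian-style units), and takes a hypothetical 10 M☉ black hole as input.

```python
# Illustrative sketch: evaluating the spin and charge bounds quoted above.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
M_sun = 1.989e30     # solar mass, kg

def max_angular_momentum(M):
    """Extremal (uncharged Kerr) bound: J <= G M^2 / c."""
    return G * M**2 / c

def dimensionless_spin(J, M):
    """Spin magnitude a* = c J / (G M^2), which lies between 0 and 1."""
    return c * J / (G * M**2)

def max_charge_si(M):
    """Extremal charge bound Q <= sqrt(G) M, converted to SI coulombs."""
    return math.sqrt(G / k_e) * M

M = 10 * M_sun                          # hypothetical stellar black hole
J_max = max_angular_momentum(M)
print(f"J_max = {J_max:.2e} kg m^2/s")                                  # ~8.8e43
print(f"a* at half-maximal J: {dimensionless_spin(J_max / 2, M):.2f}")  # 0.50
print(f"Q_max = {max_charge_si(M):.2e} C")                              # ~1.7e21
```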
Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: If the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\rm ISCO} = 3\,r_{\rm s} = \frac{6\,GM}{c^2}$, where $r_{\rm ISCO}$ is the radius of the ISCO, $r_{\rm s}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light (this and related scales are evaluated numerically in the sketch below). The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
For example, the ISCO for a particle orbiting retrograde can be as far out as about $9\,r_{\rm s}$, while the ISCO for a particle orbiting prograde can be as close as the event horizon itself. The photon sphere is a spherical boundary on which photons moving tangent to the sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will be only one photon sphere, and its radius will decrease with increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing its rotation.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free-falling matter to follow circular orbits or to halt its final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_{\rm s} = \frac{2GM}{c^2} \approx 2.95\,\frac{M}{M_\odot}~\mathrm{km}$, where $r_{\rm s}$ is the Schwarzschild radius and $M_\odot$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller; an extremal black hole could have an event horizon close to $r_+ = \frac{GM}{c^2}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222
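The horizon-scale quantities quoted above (the Schwarzschild radius, the photon sphere at 1.5 r_s, the ISCO at 3 r_s, and the inverse-square scaling of mean density with mass) are easy to check directly. A minimal Python sketch, assuming the non-spinning, uncharged Schwarzschild case and rounded physical constants:

```python
# Illustrative sketch (Schwarzschild case): r_s = 2GM/c^2, photon sphere
# at 1.5 r_s, ISCO at 3 r_s, and the mean density inside r_s.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def mean_density(M):
    """Mass over the Euclidean volume inside r_s; scales as 1/M^2."""
    r = schwarzschild_radius(M)
    return M / ((4.0 / 3.0) * math.pi * r**3)

for label, M in [("1 M_sun", M_sun), ("1e8 M_sun", 1e8 * M_sun)]:
    r_s = schwarzschild_radius(M)
    print(f"{label}: r_s = {r_s/1e3:.3g} km, "
          f"photon sphere = {1.5 * r_s / 1e3:.3g} km, "
          f"ISCO = {3 * r_s / 1e3:.3g} km, "
          f"density = {mean_density(M):.2g} kg/m^3")
# 1 M_sun:   r_s ~ 2.95 km, density ~ 1.8e19 kg/m^3
# 1e8 M_sun: r_s ~ 2.95e8 km, density ~ 1.8e3 kg/m^3
```

The 10⁸ M☉ case comes out within a factor of two of the density of water, consistent with the statement above.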
Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section. At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would be deformed only a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including those that add quantum effects, can lead to regular, or nonsingular, black holes. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse driven by hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time for accretion alone to reach quasar status. One suggestion is direct collapse of the nearly pure hydrogen (low-metallicity) gas clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is typically unstable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed black holes to form via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes with masses ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two black holes of a supermassive binary approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which it happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk. Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe.
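The Eddington balance just described is conventionally estimated with the spherical-accretion formula for ionized hydrogen, $L_{\rm Edd} = 4\pi G M m_p c / \sigma_T$; this explicit form is not given in the text above and is quoted here as the standard expression. A minimal sketch:

```python
# Illustrative sketch: Eddington luminosity, the point where radiation
# pressure on electrons balances gravity on infalling ionized hydrogen.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_p = 1.673e-27        # proton mass, kg
sigma_T = 6.652e-29    # Thomson cross-section, m^2
M_sun = 1.989e30       # kg

def eddington_luminosity(M):
    return 4 * math.pi * G * M * m_p * c / sigma_T

for M in (10 * M_sun, 1e8 * M_sun):
    print(f"M = {M / M_sun:.0e} M_sun: L_Edd = {eddington_luminosity(M):.2e} W")
# 10 M_sun  -> ~1.3e32 W
# 1e8 M_sun -> ~1.3e39 W
```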
Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any.
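The figures quoted above follow directly from the standard Hawking temperature of a Schwarzschild black hole, $T_{\rm H} = \hbar c^3 / (8\pi G M k_B)$. The sketch below, with rounded constants, reproduces the ~62 nK value for 1 M☉ and the roughly lunar-mass threshold for evaporation against the 2.7 K background.

```python
# Illustrative sketch: Hawking temperature T_H = hbar c^3 / (8 pi G M k_B).
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # kg
M_moon = 7.35e22   # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"T_H(1 M_sun) = {hawking_temperature(M_sun) * 1e9:.0f} nK")  # ~62 nK

# Invert for the mass whose Hawking temperature equals the 2.7 K background.
M_crit = hbar * c**3 / (8 * math.pi * G * 2.7 * k_B)
print(f"M(T_H = 2.7 K) = {M_crit:.2e} kg = {M_crit / M_moon:.2f} lunar masses")
# -> ~4.5e22 kg, about 0.6 lunar masses, i.e. "less than the Moon"
```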
The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. They are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes of around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is based on its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, the beams no longer cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for terrestrial noise to be able to detect them. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed.
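To give a sense of the arm-length changes involved, the sketch below assumes a strain amplitude of order 10⁻²¹, a typical published order of magnitude for detected events rather than a figure from the text above, and a 4 km arm like LIGO's.

```python
# Rough illustration: a gravitational wave of strain h changes an arm of
# length L by dL = h * L.
h = 1e-21            # assumed strain amplitude (order of magnitude)
L = 4e3              # LIGO arm length, m
d_proton = 1.7e-15   # approximate proton diameter, m

dL = h * L
print(f"dL = {dL:.1e} m, about {dL / d_proton:.0e} proton diameters")
# -> 4e-18 m, roughly a few thousandths of a proton diameter
```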
In 1998, by fitting the motions of the stars to Keplerian orbits, the astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×10⁶ M☉, within a radius of less than 0.002 light-years (a calculation sketched below). This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman-Oppenheimer-Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
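The Keplerian mass estimate referenced above can be reproduced in a few lines. Here is a minimal sketch, assuming round values for S2's orbit (published fits give a period near 16 years and a semi-major axis near 1,000 AU; the exact figures vary by study):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m
YEAR = 3.156e7    # Julian year, s

# Assumed round orbital elements for the star S2.
P = 16.0 * YEAR   # orbital period, s
a = 1030 * AU     # semi-major axis, m

# Kepler's third law, M = 4 pi^2 a^3 / (G P^2); S2's own mass is
# negligible next to the central object's.
M = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"enclosed mass ~ {M / M_SUN:.1e} solar masses")  # ~4.3e6

# Schwarzschild radius for that mass, to compare against the observed
# confinement radius of < 0.002 light-years (~1.9e13 m).
r_s = 2 * G * M / C**2
print(f"Schwarzschild radius ~ {r_s:.1e} m")  # ~1.3e10 m
```

The enclosed mass comes out near 4.3×10⁶ M☉, while the Schwarzschild radius is roughly a thousand times smaller than the observed confinement radius, which matches the caveat above that the orbital measurements alone do not prove a horizon exists.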
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Even more exotic stars called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outwards pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion (sketched below). Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
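To give the Eddington limit mentioned above a concrete scale, here is a minimal sketch using the standard hydrogen-plasma formula; the 10% radiative efficiency is an assumed, illustrative value:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_P = 1.673e-27      # proton mass, kg
SIGMA_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # Julian year, s

def eddington_luminosity(mass_kg):
    """Luminosity at which outward radiation pressure on electrons
    balances gravity on protons in ionized hydrogen, in watts."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

M = 1e9 * M_SUN                  # a quasar-scale black hole
L_edd = eddington_luminosity(M)
print(f"L_Edd ~ {L_edd:.1e} W")  # ~1.3e40 W

# Implied maximum accretion rate, assuming (illustratively) that 10%
# of the infalling rest-mass energy is radiated away.
efficiency = 0.1
mdot = L_edd / (efficiency * C**2)
print(f"max accretion ~ {mdot * YEAR / M_SUN:.0f} solar masses/yr")  # ~22
```

For a 10⁹ M☉ quasar engine this works out to roughly 10⁴⁰ W and around twenty solar masses of fuel per year, which frames the difficulty of growing such objects quickly enough at early times.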
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as when dense gas in the accretion disk traps the outward radiation pressure that would otherwise throttle accretion. However, the formation of bipolar jets may prevent such super-Eddington rates from being sustained. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew to public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-134] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since developed a complex corporate structure. As of October 2025, following a restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple copyright infringement lawsuits brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected the project to take decades and ultimately surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers; Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those at Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous", though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, and would use the proceeds to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards, taking oversight away from the nonprofit and from the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was reportedly needed to cover OpenAI's use of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added it to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the following four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion, making OpenAI the world's most valuable privately owned company and surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up this board seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which the equivalent of between $1.32 and $2.00 per hour post-tax reached the annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry". Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model it described as better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models carried growing risks, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed, amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. It asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by a more competent federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns within OpenAI regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and Alternate Media Inc. filed a copyright lawsuit against OpenAI. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied, and OpenAI claimed that it could disclose neither the recipients of ChatGPT's output nor the sources used. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her death; Adams had allegedly been murdered by her son, 56-year-old Stein-Erik Soelberg, a paranoid and delusional man who in the months prior had often discussed his ideas with ChatGPT. The estate claimed that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would work to make ChatGPT safer for users who are disconnected from reality. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Manzai] | [TOKENS: 993] |
Contents Manzai Manzai (漫才) is a traditional style of comedy in Japanese culture comparable to double act comedy. Manzai usually involves two performers (manzaishi)—a straight man (tsukkomi) and a funny man (boke)—trading jokes at great speed. Most of the jokes revolve around mutual misunderstandings, double-talk, puns and other verbal gags. In 1933, Yoshimoto Kogyo, a large entertainment conglomerate based in Osaka, introduced Osaka-style manzai to Tokyo audiences and coined the term "漫才" (one of several ways of writing the word manzai in Japanese; see § Etymology below). In recent times, manzai has often been associated with the Osaka region, and manzai comedians often speak in the Kansai dialect during their acts. History Originally based around a festival to welcome the New Year, manzai traces its origins back to the Heian period. The two manzai performers came with messages from the kami, and this was worked into a stand-up routine, with one performer showing some sort of opposition to the word of the other. This pattern still exists in the roles of the boke and the tsukkomi. Continuing into the Edo period, the style focused increasingly on the humor aspects of stand-up, and various regions of Japan developed their own unique styles of manzai, such as Owari manzai (尾張万歳), Mikawa manzai (三河万歳), and Yamato manzai (大和万歳). With the arrival of the Meiji period, Osaka manzai (大阪万才) began to implement changes that would see it surpass in popularity the styles of the former period, although at the time rakugo was still considered the more popular form of entertainment. With the end of the Taishō period, Yoshimoto Kōgyō—which itself was founded at the beginning of the era, in 1912—introduced a new style of manzai lacking much of the celebration that had accompanied it in the past. This new style proved successful and spread all over Japan, including Tokyo. Riding on the waves of new communication technology, manzai quickly spread through the mediums of stage, radio, and eventually, television and video games. Etymology The kanji for manzai have been written in various ways throughout the ages. It was originally written as 萬歳 (lit. "ten thousand years", also read banzai, meaning something like "long life"), using 萬 rather than the alternative form of the character, 万, and 歳 rather than the simpler 才 (which can also be used to write a word meaning "talent, ability"). The arrival of Osaka manzai brought another character change, this time changing the first character to 漫. Boke and tsukkomi Similar in execution to the concepts of the "funny man" and "straight man" in double act comedy (e.g. Abbott and Costello; Martin and Lewis), these roles are a very important characteristic of manzai. Boke (ボケ) comes from the verb bokeru (惚ける/呆ける), which carries the meaning of "senility" or "air-headedness" and is reflected in the boke's tendency for misinterpretation and forgetfulness. The word tsukkomi (突っ込み) refers to the role the second comedian plays in "butting in" and correcting the boke's errors. In performances it is common for the tsukkomi to berate the boke and hit them on the head with a swift smack; one traditional manzai prop often used for this purpose is a pleated paper fan called a harisen (張り扇). Another traditional manzai prop is a small drum, usually carried (and used) by the boke. A Japanese bamboo and paper umbrella is another common prop. These props are usually used only during non-serious manzai routines, as traditional manzai routines and competitions require that no props be used.
The use of props would put the comedy act closer to a conte than to manzai. The tradition of tsukkomi and boke is often used in other Japanese comedy, although it may not be as obviously portrayed as it usually is in manzai. Notable manzai acts The funniest manzai duos, according to a web survey by The Asahi Shimbun in 2012 (excerpt): Gen Takagi is a famous manzai comedian who brought manzai comedy to Finland and even had his own competition.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Cauchy_horizon] | [TOKENS: 13839] |
Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body with a temperature inversely proportional to the hole's mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, among the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object.
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be incorporated.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the gravitational redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required the development of general relativity.: 19 By 1915, Einstein had refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, the astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Brandon Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but were ubiquitous in the centers of galaxies: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored, since he had died in 2018. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting an infinite time at an infinite distance from the black hole to confirm that nothing has escaped, so the definition cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture holds for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality Q²/(4πε₀) + c²J²/(GM²) ≤ GM² for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge as a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; one stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
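As a quick numerical check of the inequality just quoted, the sketch below evaluates Q²/(4πε₀) + c²J²/(GM²) ≤ GM² in SI units for a 10 M☉ hole. The rounded constants and example values are illustrative assumptions, not figures from the text.

```python
import math

# Minimal sketch: test the extremality inequality
#   Q^2/(4*pi*eps0) + c^2 J^2 / (G M^2) <= G M^2   (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
EPS0 = 8.854e-12   # vacuum permittivity, F/m
M_SUN = 1.989e30   # solar mass, kg

def is_subextremal(mass_kg: float, charge_c: float = 0.0, ang_mom: float = 0.0) -> bool:
    """True if (M, Q, J) satisfies the inequality; False means no event horizon."""
    lhs = charge_c**2 / (4 * math.pi * EPS0) + (C**2 * ang_mom**2) / (G * mass_kg**2)
    return lhs <= G * mass_kg**2

m = 10 * M_SUN
j_half_max = 0.5 * G * m**2 / C                    # half of the maximal angular momentum G M^2 / c
print(is_subextremal(m, ang_mom=j_half_max))       # True
print(is_subextremal(m, ang_mom=3 * j_half_max))   # False: would be a naked singularity
```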
Depending on the spin of the black hole, this inward plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is J ≤ GM²/c, allowing definition of a dimensionless spin magnitude such that 0 ≤ cJ/(GM²) ≤ 1. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by Q ≤ √G·M, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being forced into the same state as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
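Continuing from the spin and charge bounds above, here is a small sketch of the dimensionless spin magnitude a* = cJ/(GM²) and the nonspinning charge bound. The text states the charge bound as Q ≤ √G·M in Gaussian-style units; the SI conversion with a factor of √(4πε₀) is my assumption, as are the rounded example values.

```python
import math

G, C, EPS0 = 6.674e-11, 2.998e8, 8.854e-12
M_SUN = 1.989e30

def spin_magnitude(mass_kg: float, ang_mom: float) -> float:
    """Dimensionless spin a* = c J / (G M^2); the text bounds this by 0 <= a* <= 1."""
    return C * ang_mom / (G * mass_kg**2)

def max_charge_si(mass_kg: float) -> float:
    """SI form of the text's nonspinning bound Q <= sqrt(G) M."""
    return math.sqrt(4 * math.pi * EPS0 * G) * mass_kg

m = 15 * M_SUN
print(spin_magnitude(m, 0.9 * G * m**2 / C))   # 0.9, i.e. 90% of maximal, like Sagittarius A*
print(f"{max_charge_si(M_SUN):.2e} C")         # ~1.7e20 coulombs for one solar mass
```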
If the progenitor star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and other star clusters or at the centers of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disk that feeds black holes will become unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets: thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism by which jets form is not yet known, but several options have been proposed. One proposed method of fueling these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
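The mass classes surveyed just above can be summarized as a simple lookup. The boundaries below follow the text's rough figures (in solar masses) and are not sharp physical cutoffs; assigning the fuzzy 10⁵ to 10⁶ gap to the intermediate class is a labeling choice of mine.

```python
def classify_black_hole(mass_msun: float) -> str:
    """Rough mass classes from the text; boundaries are indicative, not physical."""
    if mass_msun < 1e2:
        return "stellar"             # ~2 to ~100 solar masses, from stellar collapse
    if mass_msun < 1e6:
        return "intermediate-mass"   # ~10^2 to 10^5 solar masses (upper gap is fuzzy)
    if mass_msun < 1e9:
        return "supermassive"        # more than 10^6 solar masses
    return "ultramassive"            # proposed subcategory above ~10^9-10^10 solar masses

for m in (15, 3e3, 4.3e6, 2e10):
    print(f"{m:g} M_sun -> {classify_black_hole(m)}")
```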
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvin, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick toroidal shape, which resembles that of a doughnut. Quasar accretion disks are usually expected to appear blue in color. The disk of a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is r_ISCO = 3r_s = 6GM/c², where r_ISCO is the radius of the ISCO, r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
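The ISCO formula just given is easy to evaluate. The sketch below computes r_ISCO = 6GM/c² for a 10 M☉ stellar hole and for a hole with the ~4.3 million M☉ mass quoted elsewhere in this article for Sagittarius A*; the constants are rounded, and spin or charge would shift these radii as described next.

```python
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def r_isco_schwarzschild(mass_kg: float) -> float:
    """ISCO radius for a nonspinning, uncharged black hole: 3 r_s = 6 G M / c^2."""
    return 6 * G * mass_kg / C**2

print(f"{r_isco_schwarzschild(10 * M_SUN) / 1e3:.0f} km")             # ~89 km
print(f"{r_isco_schwarzschild(4.3e6 * M_SUN) / 1e9:.0f} million km")  # ~38 million km
```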
For example, the ISCO for a particle orbiting retrograde can be as far out as about 9r_s, while the ISCO for a particle orbiting prograde can lie as close as the event horizon itself. The photon sphere is a spherical boundary at which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadows of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and on whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will lie between 1 and 3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will lie between 3 and 5 Schwarzschild radii from the center. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and its radius will decrease with increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
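As a companion to the photon-sphere discussion above, this sketch evaluates the Schwarzschild case, where the photon sphere sits at 1.5 r_s = 3GM/c²; rotating or charged holes shift this radius as described.

```python
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def r_photon_sphere_schwarzschild(mass_kg: float) -> float:
    """Photon-sphere radius for a nonspinning, uncharged hole: 1.5 * r_s = 3 G M / c^2."""
    return 3 * G * mass_kg / C**2

print(f"{r_photon_sphere_schwarzschild(M_SUN) / 1e3:.2f} km")  # ~4.43 km for one solar mass
```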
In this plunging region it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass M through r_s = 2GM/c² ≈ 2.95 (M/M☉) km, where r_s is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to r₊ = GM/c², half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
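Before the inner-horizon discussion continues below, here is a short numerical aside on the Schwarzschild radius formula above and the density scaling it implies; the water comparison for a 10⁸ M☉ hole reproduces the text's claim, and the constants are rounded.

```python
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / C**2          # r_s = 2 G M / c^2

def mean_density(mass_kg: float) -> float:
    """Average density inside r_s; scales as 1/M^2, as stated in the text."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4.0 / 3.0 * math.pi * r**3)

print(f"{schwarzschild_radius(M_SUN) / 1e3:.2f} km")   # ~2.95 km, matching the formula above
print(f"{mean_density(1e8 * M_SUN):.0f} kg/m^3")       # ~1.8e3 kg/m^3, comparable to water
```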
Returning to the inner horizon: at the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can also result from the merger of two neutron stars, or of a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse of hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing heavier and heavier elements, until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion then ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift z ∼ 7, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process for building supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is the direct collapse of clouds of nearly pure hydrogen gas (low metallicity), characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and could then grow to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, the conditions needed to form black holes are rare and mostly found only in stars. However, in the early universe, conditions may have allowed black holes to form via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from collisions of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two black holes in a binary approach each other, most nearby stars are ejected, leaving little for the black holes to interact with gravitationally that would otherwise let them draw closer to each other. This phenomenon has been called the final parsec problem, as the distance at which the process stalls is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
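The Eddington limit described above has a standard closed form for accretion of ionized hydrogen, L_Edd = 4πGMm_pc/σ_T. That formula is not spelled out in the text, so treat this sketch as an assumed illustration with rounded constants.

```python
import math

G, C = 6.674e-11, 2.998e8
M_P = 1.673e-27       # proton mass, kg
SIGMA_T = 6.652e-29   # Thomson scattering cross-section, m^2
M_SUN = 1.989e30

def eddington_luminosity(mass_kg: float) -> float:
    """Standard Eddington luminosity for ionized hydrogen (an assumption here)."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

print(f"{eddington_luminosity(M_SUN):.2e} W")        # ~1.3e31 W for one solar mass
print(f"{eddington_luminosity(1e8 * M_SUN):.2e} W")  # ~1.3e39 W for a quasar-class hole
```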
Accretion beyond this limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
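As a numerical aside before the comparison of these laws resumes below: the 62-nanokelvin figure quoted above follows from the standard Schwarzschild Hawking temperature, T_H = ħc³/(8πGMk_B). The formula itself is not written out in the text and is assumed here; constants are rounded.

```python
import math

G, C = 6.674e-11, 2.998e8
HBAR = 1.055e-34   # reduced Planck constant, J*s
K_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30

def hawking_temperature(mass_kg: float) -> float:
    """Schwarzschild Hawking temperature, inversely proportional to mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"{hawking_temperature(M_SUN) * 1e9:.0f} nK")   # ~62 nK, as quoted above
# Mass whose Hawking temperature equals the 2.7 K microwave background:
m_27k = HBAR * C**3 / (8 * math.pi * G * K_B * 2.7)
print(f"{m_27k:.1e} kg")   # ~4.5e22 kg, below the Moon's mass, consistent with the text
```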
These two sets of laws are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is then travelling a slightly different distance, the beams no longer cancel out and instead produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise to be able to detect them. Since the first detection in 2015, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
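To give a sense of the scale the interferometers above must resolve, the arithmetic below multiplies a typical published strain for a strong event (h ~ 10⁻²¹, an assumed figure not taken from this text) by LIGO's 4 km arm length; the stellar-orbit discussion continues just after this aside.

```python
ARM_LENGTH_M = 4.0e3   # LIGO arm length, m
STRAIN = 1.0e-21       # characteristic strain of a strong event (assumed figure)

delta_l = STRAIN * ARM_LENGTH_M
print(f"{delta_l:.0e} m")  # ~4e-18 m, hundreds of times smaller than a proton's radius
```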
In 1998, by fitting the motions of these stars to Keplerian orbits, astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculation of the mass of Sagittarius A* to 4.3×10⁶ M☉, contained within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
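The Keplerian-orbit fitting described above reduces, for a single orbit, to Kepler's third law, M = 4π²a³/(GP²). The S2 orbital elements below (a ≈ 1000 AU, P ≈ 16 years) are round published values assumed for illustration, not numbers from the text; the result lands near the 4.3×10⁶ M☉ figure quoted above.

```python
import math

G = 6.674e-11
AU = 1.496e11     # astronomical unit, m
YEAR = 3.156e7    # year, s
M_SUN = 1.989e30

def enclosed_mass(semi_major_m: float, period_s: float) -> float:
    """Kepler's third law, solved for the central mass."""
    return 4 * math.pi**2 * semi_major_m**3 / (G * period_s**2)

m = enclosed_mass(1000 * AU, 16 * YEAR)
print(f"{m / M_SUN:.1e} M_sun")   # ~3.9e6 M_sun, close to the measured ~4.3e6
```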
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve—this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass: 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Even more exotic objects called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
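A rough scale for the microlensing detection described above is set by the Einstein radius, θ_E = √((4GM/c²)·(d_S − d_L)/(d_L·d_S)), the characteristic angular separation of the lensed images. In the sketch below, the black hole mass is the published 7.1 M☉ figure, but the lens and source distances are assumed round values for illustration:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg
KPC = 3.086e19          # kiloparsec, m
RAD_TO_MAS = 2.06265e8  # radians to milliarcseconds

M = 7.1 * M_SUN         # isolated black hole mass (2022 detection)
d_lens = 1.6 * KPC      # assumed lens distance
d_src = 8.0 * KPC       # assumed source distance

theta_e = math.sqrt(4 * G * M / C**2 * (d_src - d_lens) / (d_lens * d_src))
print(f"Einstein radius ≈ {theta_e * RAD_TO_MAS:.1f} milliarcseconds")
```

Image separations of a few milliarcseconds are far below what optical telescopes resolve, which is why the event is seen as a temporary brightening rather than as split images.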
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which would function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe at redshifts as high as z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
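The Eddington limit mentioned above can be made quantitative. It is the luminosity at which outward radiation pressure on infalling ionized hydrogen balances gravity, L_Edd = 4πGMm_pc/σ_T, and it scales linearly with mass. A minimal sketch evaluating it for a few representative masses:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.673e-27      # proton mass, kg
C = 2.998e8          # speed of light, m/s
SIGMA_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_SUN = 1.989e30     # solar mass, kg

def eddington_luminosity(mass_kg: float) -> float:
    """Luminosity at which radiation pressure balances gravity for
    spherical accretion of ionized hydrogen."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

for m_solar in (10, 1e6, 1e9):
    l_edd = eddington_luminosity(m_solar * M_SUN)
    print(f"M = {m_solar:.0e} M_sun -> L_Edd ≈ {l_edd:.1e} W")
# Output scales as ~1.3e31 W per solar mass.
```

For a typical assumed radiative efficiency of about 10%, Eddington-limited accretion lets a black hole's mass e-fold only roughly once every 50 million years, which is why billion-solar-mass quasars at z ≥ 7 are hard to explain from stellar-mass seeds.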
Finally, certain mechanisms may allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk damping the outward radiation pressure that would otherwise prevent further accretion. However, the formation of bipolar jets may prevent sustained super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other means of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Point-contact_transistor] | [TOKENS: 683] |
Contents Point-contact transistor The point-contact transistor was the first type of transistor to be successfully demonstrated. It was developed by research scientists John Bardeen and Walter Brattain at Bell Laboratories in December 1947. They worked in a group led by physicist William Shockley. The group had been working together on experiments and theories of electric field effects in solid state materials, with the aim of replacing vacuum tubes with a smaller device that consumed less power. The critical experiment, carried out on December 16, 1947, consisted of a block of germanium, a semiconductor, with two very closely spaced gold contacts held against it by a spring. Brattain attached a small strip of gold foil over the point of a plastic triangle—a configuration which is essentially a point-contact diode. He then carefully sliced through the gold at the tip of the triangle. This produced two electrically isolated gold contacts very close to each other. The piece of germanium had a surface layer with an excess of electrons. When an electric signal traveled in through the gold foil, it injected electron holes (points which lack electrons). This created a thin layer which had a scarcity of electrons. A small positive current applied to one of the two contacts had an influence on the current which flowed between the other contact and the base upon which the block of germanium was mounted. In fact, a small change in the first contact current caused a greater change in the second contact current; thus it was an amplifier. The low-current input terminal into the point-contact transistor is the emitter, while the output high-current terminals are the base and collector. This differs from the later type of bipolar junction transistor, invented in 1951, which operates as transistors still do, with the low-current input terminal as the base and the two high-current output terminals as the emitter and collector. The point-contact transistor was commercialized and sold by Western Electric and others but was eventually superseded by the bipolar junction transistor, which was easier to manufacture and more rugged. The point-contact transistor nevertheless remained in production until the 1960s, by which time the silicon planar transistor dominated the market. Forming While point-contact transistors usually worked when the metal contacts were simply placed close together on the germanium base crystal, it was desirable to obtain as high a current gain (α) as possible. To obtain a higher α in a point-contact transistor, a brief high-current pulse was used to modify the properties of the collector point of contact, a technique called 'electrical forming'. Usually this was done by charging a capacitor of a specified value to a specified voltage and then discharging it between the collector and base electrodes. Forming had a significant failure rate, so many commercial encapsulated transistors had to be discarded. While the effects of forming were understood empirically, the exact physics of the process was never adequately studied, and thus no clear theory was ever developed to explain it or to provide guidance on improving it. Unlike later semiconductor devices, it was possible for an amateur to make a point-contact transistor, starting with a germanium point-contact diode as a source of material (even a burnt-out diode could be used), and the transistor could be re-formed if damaged, several times if necessary.
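The capacitor-discharge forming step lends itself to a rough back-of-the-envelope calculation. The sketch below is illustrative only: the capacitance, charging voltage, and series resistance are assumptions chosen for demonstration, since sources describe a range of forming recipes rather than one standard set of values.

```python
# Rough scale of an electrical-forming pulse: a capacitor charged to a
# set voltage, then discharged between the collector and base electrodes.
# All component values below are illustrative assumptions.
C_FARADS = 0.1e-6   # assumed forming capacitor, 0.1 µF
V_CHARGE = 100.0    # assumed charging voltage, volts
R_SERIES = 100.0    # assumed resistance of contact plus wiring, ohms

energy = 0.5 * C_FARADS * V_CHARGE**2   # stored energy, E = CV^2/2
i_peak = V_CHARGE / R_SERIES            # peak current at the start of discharge
tau = R_SERIES * C_FARADS               # RC time constant of the pulse

print(f"pulse energy ≈ {energy * 1e6:.0f} µJ")    # 500 µJ
print(f"peak current ≈ {i_peak * 1e3:.0f} mA")    # 1000 mA
print(f"time constant ≈ {tau * 1e6:.0f} µs")      # 10 µs
```

On these assumed values the collector point briefly carries on the order of an ampere for tens of microseconds, consistent with the 'brief high-current pulse' described above and far beyond the device's normal operating currents.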
Characteristics Some characteristics of point-contact transistors differ from those of the slightly later junction transistors. |
======================================== |