[SOURCE: https://en.wikipedia.org/wiki/Jewish_art] | [TOKENS: 3088]
Jewish art
Jewish art, or the art(s) of the Jewish people, encompasses a diverse range of creative endeavors and time periods, spanning from antiquity to the modern period, culminating in the artistic movements of the Haskalah and the visual arts of the Yishuv and modern Israel, as well as – throughout all periods of Jewish history – the diverse work of the Diaspora. Jewish art encompasses the visual plastic arts, sculpture, painting, and more, all influenced by Jewish culture, history, and religious beliefs. Jewish artistic expression traces back to the art of the ancient Israelites in the Land of Israel, where it originated and evolved during the Second Temple period under the influence of various empires. This artistic tradition underwent further development during the Mishnaic and Talmudic eras, reflecting cultural and religious shifts within Jewish communities. With the dispersion of Jews across the globe, known as the Jewish diaspora, artistic production persisted throughout the millennia, adapting to diverse cultural landscapes while retaining distinct Jewish themes and motifs. Until the emancipation, Jewish art was mostly centered on religious practices and rituals. Following the emancipation in the early modern period, Jewish artists, notably in Europe, began to explore different themes, with varying degrees of connection to religious art. Notably, Jews in France, some of whom had fled Eastern Europe, at times produced modernist art of a completely secular nature. In the first half of the 20th century, a group composed mainly of these Eastern European Jews fleeing persecution became known as the School of Paris. From the mid to late 20th century, following the Holocaust and the immigration of Jews to modern Israel, Israel re-emerged as a center of Jewish art, while Europe declined in importance as a center of Jewish culture.
Pre-Second Temple period
Prior to the First Temple period and throughout its duration, literary sources point to the existence of craftsmanship that could be considered both art in its restrictive sense and natively Jewish. This was largely related to matters of ritual, such as the decoration of the Tabernacle and the Temple that replaced it. Within this context, a number of figurative characters were present, such as the cherubs of the Ark of the Covenant and of the Solomonic Holy of Holies, and the twelve bronze oxen that formed the base of the Molten Sea. Artifacts bearing plastic depictions, such as the plaques unearthed in King Ahab's "House of Ivory" in Samaria and Israelite seals found in many locations in the Land of Israel, appear to be influenced by Phoenician, Assyrian, or Egyptian styles.
Second Temple period and late antiquity
In the Second Temple period, Jewish art was heavily influenced by the biblical injunction against graven images, leading to a focus on geometric, floral, and architectural motifs rather than figurative or symbolic representations. This artistic restraint was a response to the Hellenistic cultural pressures that threatened Jewish religious practices, notably the imposition of idolatry. Symbolic elements like the menorah and the shewbread table were used sparingly, primarily reflecting their significance in priestly duties. However, the rise of Christianity and its establishment as the dominant religion of the Roman Empire marked a turning point in Jewish artistic expression.
This period, known as Late Antiquity, witnessed Jewish communities gradually incorporating symbolic motifs into their synagogal and funerary art. The expansion of these symbols beyond the menorah and the shewbread table to include other ritual objects and emblems signified a broader expression of Jewish identity. This shift in cultural representation aimed to affirm Jewish faith and community following the rise of Christian dominance in the Mediterranean region, making symbols like the menorah emblematic of national identity as well as religious faith. The menorah, initially a representation of priestly duties in the Second Temple, evolved into a central symbol of Jewish identity, especially after the Temple's destruction. Its depiction in Jewish art, ranging from synagogue mosaics to catacombs, signified not only the religious importance of the Temple but also served as a distinguishing marker of Jewish places of worship and burial. Scholars debate the menorah's symbolism, with interpretations of its seven branches ranging from divine light to the seven planets or the days of the week, reflecting its integral role in both daily rituals and as a symbol of Judaism itself. The shewbread table, alongside other ritual objects such as the lulav, etrog, shofar, and flask, also played a significant role in Jewish art, marking the continuation of Temple traditions in diaspora communities. These objects, alongside depictions of the Temple, the Ark of the Scrolls, and the Ark of the Covenant, are part of an array of symbols used by Jewish communities to express and maintain their religious and cultural identity.
Medieval Jewish art
During the medieval period (roughly the 5th to 15th centuries), Jewish communities continued to produce works of Jewish art, most of it centered on religious life, notably synagogues and religious texts. Jewish scholars and texts, including works by luminaries like Rashi and Maimonides, often featured illustrations, some of which were crafted by artists who also served Christian clients, with notable connections between Jewish and Christian artists. The Florentine artist Mariano del Buono and the Master of the Barbo Missal, known for their work for Christian patrons, also created significant Jewish pieces. Ritual objects such as Hanukkah lamps and kiddush cups, while prescribed by Jewish law, evolved in form and decoration over time, often mirroring the luxury items and aesthetic preferences of their Christian counterparts. This adaptability and integration are further evidenced in medieval synagogue architecture, which frequently borrowed elements from contemporary Christian buildings, as seen in Central European synagogues such as those in Regensburg and Prague, which incorporate Gothic styles and motifs. From the time of the Muslim conquests to the Renaissance in Europe, most Jews lived in the Muslim world, and Jewish art made there reflected the characteristics of Islamic art. A 2024 exhibition at The Magnes Collection of Jewish Art and Life explored how the two cultures "shared graphic forms and visual landscapes, attitudes towards sacred texts and human bodies, and networks of trade and knowledge exchange." A 2025 exhibition at the Grolier Club of material from the Jewish Theological Seminary displayed manuscript pages from Jewish communities in Yemen, North Africa, Iran, and Iraq. Jewish art from the Islamic world, like Islamic art, featured "elaborate patterns, floral motifs and very few depictions of people."
Artifacts from this era reflected the cultural exchanges between Jews and Christians, often the result of intense theological dialogue and mutual curiosity between the two faiths. Christian scholars' efforts to learn Hebrew and to challenge Jewish beliefs, as well as the remarkably accurate portrayal of Jews and Jewish practices in Christian art, suggest, according to the Met, an interaction that was both intellectual and artistic. Objects such as the bronze menorah in the Cathedral of Essen and the head of King David from Notre-Dame de Paris are pointed to as examples of such artworks. Jewish manuscripts of the medieval period, notably in medieval Spain, were illuminated with visual imagery. The Sarajevo Passover Haggadah, originating in Northern Spain in the 14th century, is a notable example. The Golden Haggadah, originating in Catalonia, exhibits Gothic and Italianate influences.
Early modern period
Jewish art continued to be projected through sacred spaces and religious art. The exteriors of synagogues, particularly notable in the Polish-Lithuanian Commonwealth, were often unassuming, with plain facades that concealed richly decorated interiors. This contrast underscored a Jewish philosophical notion wherein the sacred resides hidden within the mundane, a concept mirrored in the architectural dichotomy between the exterior and interior of these religious buildings. The internal beauty of these synagogues, adorned with detailed paintings and elaborate designs, stood in stark contrast to their modest exteriors, a dichotomy driven by a desire to avoid provoking Christian antagonism and by restrictions imposed by Christian authorities, such as limitations on the height of Jewish religious buildings. Such restrictions led to innovative architectural solutions, including lowering the floors of synagogues to create a sense of increased interior height, a practice echoing the biblical verse "I call to you from the depths, O Lord" (Ps. 130:1). This approach not only adhered to the legal constraints but also enriched the spiritual ambiance of the synagogue space. In Italy, synagogues were often discreetly integrated into the upper floors of tenements within ghettos, their exteriors giving no hint of the opulent Baroque interiors within. This concealment extended beyond the synagogues' architecture to their urban placement, with some synagogues in Central Europe being hidden behind courtyards or other buildings, as seen in Düsseldorf and Vienna. This strategic concealment served both to comply with external regulations and to safeguard the sanctity and security of the Jewish worship space.
Following the emancipation
The Napoleonic Code, written under Napoleon Bonaparte's French Empire, liberated the Jews who had been restricted to ghettos and marginalized economically and politically. It also initiated Jewish emancipation across Europe, granting religious freedom to Jews, Protestants, and Freemasons. This act of liberation extended to territories conquered by the First French Empire, where Napoleon abolished laws that confined Jews to ghettos and restricted their rights. By 1808, he further integrated French Judaism into the state, establishing the national Israelite Consistory alongside the recognized Christian cults, thereby formally acknowledging Jewish communities within French society for the first time.
As Jews were emancipated and gained civil rights, they began to integrate into mainstream society and to work in occupations previously closed to them. Jews could become mainstream artists and were increasingly influenced by the prevailing cultural and artistic movements of their time. These artists also began to create art beyond religious texts and spaces and to engage in the secular arts. This period also saw an increase in Jewish patronage of the arts. Early critics like Majer Bałaban viewed Jewish art broadly, including any object that exhibited "features of Jewish creativity," while Abram Efros contended that Jewish artists should be recognized within the national contexts of their residence, arguing, "Jewish artists belong to the art of the country where they live and work". Following the emancipation, figures such as Maurycy Gottlieb blurred traditional boundaries, integrating Jewish themes into a broader Christian iconographic tradition and laying foundational elements for Jewish genre painting. In the late 19th and early 20th centuries, the rise of Jewish nationalism added an ideological dimension to Jewish art, with Jewish genre painting used by some as a medium for expressing the Zionist revival and the Jewish experience of exile. Religious art and architecture were also manifested in the wooden synagogues of Eastern Europe, which would eventually be destroyed by the Nazis in the Second World War. The works of artists such as Szmul Hirszenberg and Izidor Kaufmann showcased an interweaving of Jewish narratives with a universal moral vocabulary, drawing mainly on Christian allegories to depict Jewish suffering and resilience. Their art, while deeply rooted in Jewish experiences, mirrored the allegorical and dramatic modes prevalent in Christian painting, responding to the artistic currents and ideologies of the time. Hirszenberg's works, for example, such as "Golus" and "Czarny Sztandar" (The Black Banner, 1907, Jewish Museum, New York), used Christian allegories to communicate broader themes of exile, suffering, and redemption, embodying the tension between death and resurrection characteristic of Christian imagery.
Modern period
The École de Paris ("School of Paris" in French) is a term coined in 1925 by the art critic André Warnod to describe a diverse group of artists, many of Jewish origin from Eastern Europe, who settled in Montparnasse, Paris. Many of these Jewish artists arrived in Paris seeking artistic education, having fled persecution, particularly in Eastern Europe. The École de Paris included notable figures such as Marc Chagall, Jules Pascin, Chaïm Soutine, Yitzhak Frenkel Frenel, Amedeo Modigliani, and Abraham Mintchine. Their work often depicted Jewish themes and expressed deep emotional intensity, reflecting their experiences of discrimination, pogroms, and the upheavals of the Russian Revolution. The art of these artists, especially those of Eastern European origin, is said to have reflected the plight and suffering of the Jewish people in expressionist works. Despite facing xenophobia and criticism from some quarters, these artists played a central role in the vibrant artistic community of Paris, frequenting cafés, communicating in Yiddish, and contributing significantly to the city's status as the capital of the art world. The School of Paris ebbed away following the Nazi occupation of France and the Holocaust, during which several Jewish artists were murdered or died of disease. Several of the artists, such as Marc Chagall, dispersed to Israel and the United States.
In Israel, the influence of the École de Paris persisted from the 1920s through the 1940s, with French art, and especially French Jewish artists, continuing to shape the Israeli art scene for decades. The return of the École de Paris artist Yitzhak Frenkel Frenel to pre-independence Israel in 1925 and the establishment of the Histadrut Art Studio marked the beginning of this influence. His students, upon returning from Paris, further amplified the French artistic influence in pre-independence Israel. This period saw artists in Tel Aviv and Safed creating works that portrayed humanity and emotion, often with a dramatic and tragic quality reflective of Jewish experiences. Safed, one of the holy cities of Judaism, in particular became a center for artists influenced by the École de Paris in the mid to late 20th century. Its mystical and romantic setting attracted artists like Moshe Castel and Yitzhak Frenkel Frenel, who sought to capture the city's spiritual essence and dynamic landscapes. In 1906, the Bezalel School of Arts and Crafts was founded by Boris Schatz, blending European Art Nouveau with local artistic traditions. This period also saw the emergence of modern art movements and a shift towards a more subjective artistic expression, challenging the traditional confines of Bezalel's artistic doctrine. With the establishment of studios such as the Histadrut art studio and of exhibitions oriented toward modern art following the arrival of the École de Paris's influence, Tel Aviv emerged as a cultural hub, in time replacing Jerusalem as the country's prominent art centre. During the early 20th century, artists began to settle in Safed, leading to the establishment of the Artists' Quarter of Tzfat, which catalyzed what is at times referred to as a "golden age of art" in the city, spanning the 1950s through the 1970s. This era also saw the rise of significant art movements such as the Canaanite and New Horizons movements, which further diversified the Israeli art scene.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PL-11] | [TOKENS: 203]
PL-11
PL-11 is a high-level machine-oriented programming language for the PDP-11, developed by R.D. Russell of CERN in 1971. Written in Fortran IV, it is similar to PL360 and is cross-compiled on other machines. PL-11 was originally developed as part of the Omega project, a particle physics facility operational at CERN (Geneva, Switzerland) during the 1970s. The first version was written for the CII 10070, a clone of the XDS Sigma 7 built in France. Towards the end of the 1970s it was ported to the IBM 370/168, then part of CERN's computer centre. In 1974, it was ported to the Burroughs B6700 at Massey University in New Zealand. A report describing the language is available from CERN.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Parallel_36%C2%B030%E2%80%B2_north] | [TOKENS: 1349]
Parallel 36°30′ north
The parallel 36°30′ north (pronounced 'thirty-six degrees and thirty arcminutes') is a circle of latitude that is 36+1⁄2 degrees north of the equator of the Earth. This parallel of latitude is particularly significant in the history of the United States as the line of the Missouri Compromise, which was used to divide the prospective slave and free states west of the Mississippi River, with the exception of Missouri, which is mostly north of this parallel. The line continues to hold cultural, economic, and political significance to this day; the Kinder Institute for Urban Research defines the Sun Belt as being south of 36°30′N latitude.
In colonial America
The parallel was the Royal Colonial Boundary of 1665.
In the United States
In the United States, the parallel 36°30′ forms part of the boundary between Tennessee and Kentucky, in the region west of the Tennessee River and east of the Mississippi River. This parallel also forms part of the boundary between Missouri and Arkansas in the region west of the St. Francis River, and part of the boundary between the Oklahoma Panhandle and the Texas Panhandle. The rest of the boundaries between Virginia and North Carolina, between Virginia and Tennessee, and between Tennessee and Kentucky lie close to the parallel 36°30′. The boundary between Kentucky and Tennessee was defined as 36°30′, based on the Royal Colonial Boundary of 1665 that set the boundary of the Colony of Virginia and the Province of Carolina. In 1779 and 1780, surveyors were sent to mark the line on the ground as far as the Tennessee River. As they worked west, their line drifted north, until by the time they reached the river it was about 10′ north of 36°30′. Despite this error, the boundary was set along the line surveyed. The final part of the Kentucky–Tennessee boundary, between the Tennessee River and the Mississippi River, was not surveyed until after 1819, when treaties extinguished Native American claims in the area. The final portion was surveyed east from the Mississippi River along 36°30′. Due to the relative precision of the survey of 36°30′ on the Mississippi River, Congress decided to continue the line west as the northern boundary of Arkansas Territory, with the exception of the Missouri Bootheel. The parallel 36°30′ north is part of a nearly straight east–west line of state borders (with small variations) starting on the East Coast of the United States, beginning with the border between Virginia and North Carolina. However, this boundary and the one between Kentucky and Tennessee lie a few miles north of 36°30′ in places. The line west of Arkansas is slightly further north, at 37°. In southeastern Missouri, the Missouri Bootheel along the Mississippi River extends about 50 miles (80 kilometers) to the south, all the way to the 36th parallel north, and about 30 miles (50 kilometers) inland. This was because politicians in that region along that major river felt that it would be advantageous to be located in Missouri rather than in the Arkansas Territory, which became the State of Arkansas in 1836. The parallel 36°30′ then forms the rest of the boundary between Missouri and Arkansas. The Missouri Compromise of 1820 established the latitude 36°30′ as the northern limit for slavery to be legal in the territories of the west. As part of this compromise, Maine (formerly a part of Massachusetts) was admitted as a free state. This addition maintained the balance of power in the U.S. Senate
between the free states and the slaveholding states. The bulk of Missouri lies north of the 36°30′ line, but the Southern planters who lived in southeastern Missouri supported slavery, especially for farming on their cotton plantations; part of the Missouri Compromise arose from this. Also, the slave states of the Southern United States wanted the support of another slave state so that the Senate could not abolish slavery in the United States. This situation remained in effect for decades, because as the free states of Michigan, Wisconsin, and Iowa were admitted to the Union, the new slave states of Arkansas, Florida, and Texas were also admitted. When the Republic of Texas joined the United States in 1845 as a slave state, it was required to cede all of its claimed land north of the 36°30′ latitude to the Federal Government. Over the following half-century, this land became parts of Kansas, Colorado, New Mexico, and Oklahoma. The Compromise of 1850 confirmed that the 36°30′ parallel was the northernmost boundary of Texas. Kansas was then admitted to the Union as a free state in 1861. The creation of the New Mexico Territory and the Utah Territory in 1850, the Kansas Territory in 1854, and the Colorado Territory in 1861 moved the boundary of one of the western territories, New Mexico, north to the 37th parallel north. New Mexico Territory was eventually split into two states, New Mexico and Arizona, which were admitted in 1912, but this was long after the 13th Amendment had abolished slavery in all of the United States. The gap between the northern boundary of Texas on the parallel 36°30′ north and the southern boundaries of Kansas and Colorado on the parallel 37° north created the No Man's Land that later became the Oklahoma Panhandle in 1889. While a significant part of Nevada (containing Las Vegas) is south of 36°30′, at the time of the admission of Nevada in 1864 that land was in the New Mexico Territory. This land was not split off from the new Arizona Territory until 1871, when it was given to Nevada by the Federal government. The Compromise of 1850 made no attempt to divide California along the line of 36°30′, or to allow slavery south of it; the social and political conditions created by the California Gold Rush ruled out any such idea, and California was admitted to the Union as a free state. During the American Civil War (1861–65), all of the states located wholly south of 36°30′ north joined the Confederate States of America. All of the states with land north of the parallel, except Virginia, stayed in the Union, although Kentucky and Missouri had Confederate legislatures elected in parallel with their regular legislatures.
Around the world
Starting at the Prime Meridian and heading eastwards, the parallel 36°30′ north passes through:
========================================
[SOURCE: https://en.wikipedia.org/wiki/Google_Chrome] | [TOKENS: 11768]
Google Chrome
Google Chrome is a cross-platform web browser developed by Google. It was launched in 2008 for Microsoft Windows, built with free software components from Apple WebKit and Mozilla Firefox. Versions were later released for Linux, macOS, iOS, iPadOS, and Android, where it is the default browser. The browser is also the main component of ChromeOS, on which it serves as the platform for web applications. Most of Chrome's source code comes from Google's free and open-source software project known as Chromium, but Chrome is licensed as proprietary freeware. WebKit was the original rendering engine, but Google eventually forked it to create the Blink engine; as of 2017, every Chrome variant except the iOS version used Blink. As of December 2025, StatCounter estimates that Chrome has a 75.23% worldwide browser market share on personal computers. It is the most-used browser on tablets (having far surpassed Safari) and is also dominant on smartphones. With a market share of 71.22% across all platforms combined as of December 2025, Chrome is the most used web browser in the world. Google chief executive Eric Schmidt had been involved in the earlier "browser wars" of U.S. corporate history and was against the company's expansion into such a new area. However, Google's co-founders, Sergey Brin and Larry Page, spearheaded a software demonstration that pushed Schmidt into making Chrome a core business priority, which resulted in it becoming a commercial success. Because of the proliferation of Chrome, Google has expanded the "Chrome" brand name to other products, including ChromeOS, Chromecast, Chromebook, Chromebit, Chromebox, and Chromebase.
History
Google chief executive Eric Schmidt opposed the development of an independent web browser for six years. He stated that "at the time, Google was a small company", and he did not want to go through "bruising browser wars". Company co-founders Sergey Brin and Larry Page hired several Mozilla Firefox developers and built a demonstration of Chrome. Afterwards, Schmidt said, "It was so good that it essentially forced me to change my mind." In September 2004, rumors of Google building a web browser first appeared. Online journals and U.S. newspapers stated at the time that Google was hiring former Microsoft web developers, among others. The rumors also came shortly after the release of Mozilla Firefox 1.0, which was surging in popularity and taking market share from Internet Explorer, which had noted security problems. Chrome is based on the open-source code of the Chromium project. Development of the browser began in 2006, spearheaded by Sundar Pichai. Google has since become the world's most popular search engine, with about 90% of searches on search engines coming from Google users. The release announcement was originally scheduled for September 3, 2008, and a comic by Scott McCloud was to be sent to journalists and bloggers explaining the features within the new browser. Copies intended for Europe were shipped early, and German blogger Philipp Lenssen of Google Blogoscoped made a scanned copy of the 38-page comic available on his website after receiving it on September 1, 2008. Google subsequently made the comic available on Google Books and mentioned it on its official blog along with an explanation for the early release. The product was named "Chrome" after its initial development code name, chosen because chrome is associated with fast cars and speed.
Google kept the development project name as the final release name, as a "cheeky" or ironic moniker, since one of the main aims was to minimize the user interface chrome. The browser was first publicly released, officially as a beta version, on September 2, 2008, for Windows XP and newer, with support for 43 languages, and later as a "stable" public release on December 11, 2008. On that same day, a CNET news item drew attention to a passage in the Terms of Service statement for the initial beta release, which seemed to grant Google a license to all content transferred via the Chrome browser. This passage was inherited from the general Google terms of service. Google responded to this criticism immediately by stating that the language used was borrowed from other products, and removed the passage from the Terms of Service. Chrome quickly gained about 1% usage share. After the initial surge, usage share dropped until it hit a low of 0.69% in October 2008. It then started rising again, and by December 2008 Chrome had again passed the 1% threshold. In early January 2009, CNET reported that Google planned to release versions of Chrome for macOS and Linux in the first half of the year. The first official macOS and Linux developer previews of Chrome were announced on June 4, 2009, with a blog post saying they were missing many features and were intended for early feedback rather than general use. In December 2009, Google released beta versions of Chrome for macOS and Linux. Google Chrome 5.0, announced on May 25, 2010, was the first stable release to support all three platforms. Chrome was one of the twelve browsers offered on BrowserChoice.eu to European Economic Area users of Microsoft Windows in 2010. Chrome was assembled from 25 different code libraries from Google and third parties, such as Mozilla's Netscape Portable Runtime, Network Security Services, NPAPI (dropped as of version 45), the Skia Graphics Engine, SQLite, and several other open-source projects. The V8 JavaScript virtual machine was considered a sufficiently important project to be split off (as was Adobe/Mozilla's Tamarin) and handled by a separate team in Denmark coordinated by Lars Bak. According to Google, existing implementations were designed "for small programs, where the performance and interactivity of the system weren't that important", but web applications such as Gmail "are using the web browser to the fullest when it comes to DOM manipulations and JavaScript", and would therefore significantly benefit from a JavaScript engine that could work faster. Chrome initially used the WebKit rendering engine to display web pages. In 2013, Google forked WebKit's WebCore component to create its own layout engine, Blink. Blink uses only WebKit's "WebCore" components, substituting other components, such as its own multi-process architecture, for WebKit's native implementations. Chrome is internally tested with unit testing, automated testing of scripted user actions, fuzz testing, and WebKit's layout tests (99% of which Chrome is claimed to have passed), and against commonly accessed websites inside the Google index within 20–30 minutes. Google created Gears for Chrome, which added features for web developers, typically relating to the building of web applications, including offline support. Google phased out Gears as the same functionality became available in the HTML5 standards.
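The offline support Gears provided survives in the standards-based Service Worker and Cache APIs. The sketch below is a minimal illustration of those HTML5-era standards rather than Gears code; the cache name and asset list are hypothetical placeholders.

```typescript
// sw.ts – registered from a page via navigator.serviceWorker.register("/sw.js")
declare const self: ServiceWorkerGlobalScope;

const CACHE = "offline-v1"; // hypothetical cache name

self.addEventListener("install", (event) => {
  // Pre-cache a minimal set of assets when the worker is installed.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(["/", "/app.js"]))
  );
});

self.addEventListener("fetch", (event) => {
  // Answer from the cache first; fall back to the network when online.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

Once such a worker is installed, previously cached pages keep loading with no network connection, which is essentially the capability Gears had offered.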
In March 2011, Google introduced a new simplified logo to replace the previous 3D logo that had been used since the project's inception. Google designer Steve Rura explained the company's reasoning for the change: "Since Chrome is all about making your web experience as easy and clutter-free as possible, we refreshed the Chrome icon to better represent these sentiments. A simpler icon embodies the Chrome spirit – to make the web quicker, lighter, and easier for all." On January 11, 2011, the Chrome product manager, Mike Jazayeri, announced that Chrome would remove H.264 video codec support from its HTML5 player, citing the desire to bring Google Chrome more in line with the open codecs available in the Chromium project, on which Chrome is based. Despite this, on November 6, 2012, Google released a version of Chrome for Windows that added hardware-accelerated H.264 video decoding. In October 2013, Cisco announced that it was open-sourcing its H.264 codecs and would cover all fees required. On February 7, 2012, Google launched Google Chrome Beta for Android 4.0 devices. On many new devices with Android 4.1 or later preinstalled, Chrome is the default browser. In May 2017, Google announced a version of Chrome for augmented reality and virtual reality devices.
Features
Google Chrome features a minimalistic user interface, and its user-interface principles were later implemented in other browsers – for example, the merging of the address bar and search bar into the omnibox. The first release of Google Chrome passed both the Acid1 and Acid2 web standards compliance tests. Beginning with version 4.0, Chrome passed all aspects of the Acid3 test. However, as of April 2017, Chrome no longer passes Acid3 due to changing consensus on web standards. Chrome has had very good support for JavaScript/ECMAScript according to Ecma International's ECMAScript standards conformance test, Test 262 (version ES5.1, May 18, 2012). This test reports as the final score the number of tests a browser failed; hence, lower scores are better. In this test, Chrome version 37 scored 10 failed/11,578 passed. For comparison, Firefox 19 scored 193 failed/11,752 passed, and Internet Explorer 9 had a score of 600+ failed, while Internet Explorer 10 had a score of 7 failed. In 2011, on the official CSS 2.1 test suite by the standardization organization W3C, WebKit, then Chrome's rendering engine, passed 89.75% of the CSS 2.1 tests (89.38% out of the 99.59% covered). On the HTML5 web standards test, Chrome 41 scored 518 out of 555 points, placing it ahead of the five most popular desktop browsers. Chrome 41 on Android scored 510 out of 555 points. Chrome 44 scored 526, only 29 points less than the maximum score. By default, the main user interface includes back, forward, refresh/cancel, and menu buttons. A home button is not shown by default, but can be added through the Settings page to take the user to the new tab page or a custom home page. Tabs are the main component of Chrome's user interface and have been moved to the top of the window rather than sitting below the controls. This subtle change contrasts with many existing tabbed browsers, which are based on windows and contain tabs. Tabs, with their state, can be transferred between window containers by dragging. Each tab has its own set of controls, including the Omnibox. The Omnibox is a URL box that combines the functions of both the address bar and search box.
If a user enters the URL of a site previously searched from, Chrome allows pressing Tab to search the site again directly from the Omnibox. When a user starts typing in the Omnibox, Chrome provides suggestions for previously visited sites (based on the URL or in-page text), popular websites (not necessarily visited before – powered by Google Instant), and popular searches. Although Instant can be turned off, suggestions based on previously visited sites cannot be turned off. Chrome will also autocomplete the URLs of sites visited often. If a user types keywords into the Omnibox that do not match any previously visited websites and presses Enter, Chrome conducts the search using the default search engine. One of Chrome's differentiating features is the New Tab Page, which can replace the browser home page and is displayed when a new tab is created. Originally, this showed thumbnails of the nine most visited websites, along with frequent searches, recent bookmarks, and recently closed tabs, similar to Internet Explorer and Firefox with Google Toolbar, or Opera's Speed Dial. In Google Chrome 2.0, the New Tab Page was updated to allow users to hide thumbnails they did not want to appear. Starting in version 3.0, the New Tab Page was revamped to display thumbnails of the eight most visited websites. The thumbnails could be rearranged, pinned, and removed; alternatively, a list of text links could be displayed instead of thumbnails. It also features a "Recently closed" bar that shows recently closed tabs and a "tips" section that displays hints and tricks for using the browser. Starting with Google Chrome 3.0, users can install themes to alter the appearance of the browser. Many free third-party themes are provided in an online gallery, accessible through a "Get themes" button in Chrome's options. Chrome includes a bookmarks submenu that lists the user's bookmarks, provides easy access to Chrome's Bookmark Manager, and allows the user to toggle a bookmarks bar on or off. On January 2, 2019, Google introduced a native dark theme for Chrome on Windows 10. In 2023, it was announced that Chrome would be completely revamped using Google's Material You design language; the revamp would include more rounded corners, Chrome colors swapped out for a dynamic color system similar to the one introduced in Android 12, a revamped address bar, new icons and tabs, and a simplified three-dot menu. Starting with Google Chrome 4.1, the application added a built-in translation bar using Google Translate. Language translation is available for 52 languages. When Chrome detects a page in a language other than the user's preferred language set at installation, it asks the user whether or not to translate the page. Chrome allows users to synchronize their bookmarks, history, and settings across all devices with the browser installed by sending and receiving data through a chosen Google Account, which in turn updates all signed-in instances of Chrome. This can be authenticated either through Google credentials or a sync passphrase. For web developers, Chrome has an element inspector that allows users to look inside any web page's Document Object Model (DOM) structure and examine the code elements that make up the webpage. Chrome has special URLs that load application-specific pages instead of websites or files on disk. Chrome also has a built-in ability to enable experimental features.
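The history-based suggestion behavior described above can be illustrated schematically. The following toy sketch, which is not Chrome's actual implementation, matches typed text against visited URLs and ranks candidates by visit count; the URLs and counts are hypothetical.

```typescript
interface HistoryEntry {
  url: string;
  visits: number;
}

// Return up to `limit` visited URLs containing the typed text, most-visited first.
function suggest(history: HistoryEntry[], typed: string, limit = 5): string[] {
  const needle = typed.toLowerCase();
  return history
    .filter((entry) => entry.url.toLowerCase().includes(needle))
    .sort((a, b) => b.visits - a.visits)
    .slice(0, limit)
    .map((entry) => entry.url);
}

const visited: HistoryEntry[] = [
  { url: "https://news.example.com", visits: 42 },
  { url: "https://mail.example.com", visits: 17 },
];

console.log(suggest(visited, "mail")); // ["https://mail.example.com"]
```

Chrome's real Omnibox additionally blends in search suggestions, in-page text matches, and popular sites, as described above.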
The page of experimental features, originally called about:labs, had its address changed to about:flags to make it less obvious to casual users. The desktop edition of Chrome can save pages as HTML with assets in a "_files" subfolder, or as an unprocessed HTML-only document. It also offers an option to save in the MHTML format. Chrome allows users to make local desktop shortcuts that open web applications in the browser. The browser, when opened in this way, contains none of the regular interface except for the title bar, so as not to "interrupt anything the user is trying to do". This allows web applications to run alongside local software (similar to Mozilla Prism and Fluid). This feature, according to Google, would be enhanced by the Chrome Web Store, a one-stop web-based web applications directory that opened in December 2010. In September 2013, Google started making Chrome apps "For your desktop". This meant offline access, desktop shortcuts, and less dependence on Chrome – apps launch in a window separate from Chrome and look more like native applications. Announced on December 7, 2010, the Chrome Web Store allows users to install web applications as extensions to the browser. Although most of these extensions function simply as links to popular web pages or games, some of the apps, like Springpad, provide extra features like offline access. Themes and extensions are also tightly integrated into the store, allowing users to search the entire catalog of Chrome extras. The Chrome Web Store was opened on February 11, 2011, with the release of Google Chrome 9.0. Browser extensions can modify Google Chrome. They are supported by the browser's desktop edition, but not on mobile. These extensions are written using web technologies like HTML, JavaScript, and CSS. They are distributed through the Chrome Web Store, initially known as the Google Chrome Extensions Gallery. Some extensions focus on providing accessibility features. Google Tone is an extension developed by Google that, when enabled, can use a computer's speakers to exchange URLs with nearby Internet-connected computers that also have the extension enabled. On September 9, 2009, Google enabled extensions by default on Chrome's developer channel and provided several sample extensions for testing. In December, the Google Chrome Extensions Gallery beta began with approximately 300 extensions. It was launched on January 25, 2010, along with Google Chrome 4.0, containing approximately 1500 extensions. In 2014, Google started preventing some Windows users from installing extensions not hosted on the Chrome Web Store. The following year, Google reported a "75% drop in customer support help requests for uninstalling unwanted extensions", which led it to expand this restriction to all Windows and Mac users. In October 2018, Google announced a major future update to Chrome's extension API, known as "Manifest V3" (in reference to the manifest file contained within extensions). Manifest V3 is intended to modernize the extension architecture and improve the security and performance of the browser; it adopts declarative APIs to "decrease the need for overly-broad access and enable more performant implementation by the browser", replaces background pages with feature-limited "service workers" to reduce resource usage, and prohibits remotely hosted code. Google faced criticism for this change, since it limits the number of rules and types of expressions that may be checked by ad blockers.
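To make the declarative approach concrete, here is a minimal sketch of a Manifest V3-style network rule written against the chrome.declarativeNetRequest extension API; the blocked domain is a hypothetical example, and the ambient declaration merely stands in for the browser-supplied API object.

```typescript
// Ambient stand-in for the browser-provided extension API object.
declare const chrome: any;

// In the extension's background service worker: register one dynamic rule
// that blocks scripts and images from a (hypothetical) ad domain. The
// browser, not the extension, evaluates the rule against network requests.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // drop any earlier rule with the same id
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||ads.example.com^", // filter-list-style pattern
        resourceTypes: ["script", "image"],
      },
    },
  ],
});
```

Because the browser itself evaluates such rules, the extension never sees the request contents, which is the performance and privacy rationale for the design; the fixed ceiling on how many rules may be registered is what ad-blocker developers objected to.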
Additionally, the prohibition of remotely hosted code restricts the ability of ad-blocking filter lists to be updated independently of the extension itself. The JavaScript virtual machine used by Chrome, the V8 JavaScript engine, has features such as dynamic code generation, hidden class transitions, and precise garbage collection. In 2008, several websites performed benchmark tests using the SunSpider JavaScript Benchmark tool as well as Google's own set of computationally intense benchmarks, which include ray tracing and constraint solving. They unanimously reported that Chrome performed much faster than all competitors against which it had been tested, including Safari (for Windows), Firefox 3.0, Internet Explorer 7, Opera, and Internet Explorer 8. However, in independent tests of JavaScript performance published on October 11, 2010, Chrome had been scoring just behind Opera's Presto engine since it was updated to version 10.5. On September 3, 2008, Mozilla responded by stating that its own TraceMonkey JavaScript engine (then in beta) was faster than Chrome's V8 engine in some tests. John Resig, Mozilla's JavaScript evangelist, further commented on the performance of different browsers on Google's own suite, noting Chrome's "decimating" of the other browsers, but he questioned whether Google's suite was representative of real programs. He stated that Firefox 3.0 performed poorly on recursion-intensive benchmarks, such as those of Google, because the Mozilla team had not yet implemented recursion-tracing. Two weeks after Chrome's launch in 2008, the WebKit team announced a new JavaScript engine, SquirrelFish Extreme, citing a 36% speed improvement over Chrome's V8 engine. Like most major web browsers, Chrome uses DNS prefetching to speed up website lookups, as do other browsers like Firefox, Safari, Internet Explorer (where it is called DNS pre-resolution), and Opera (as a UserScript, not built in). Chrome formerly used its now-deprecated SPDY protocol instead of only HTTP when communicating with servers that supported it, such as Google services, Facebook, and Twitter. SPDY support was removed in Chrome version 51, as SPDY had been replaced by HTTP/2, a standard based upon it. In November 2019, Google said it was working on several "speed badging" systems that let visitors know why a page is taking time to show up. The variations include simple text warnings and more subtle signs that indicate a site is slow. No date was given for when the badging system would be included with the Chrome browser. Chrome formerly supported a Data Saver feature for making pages load faster, called Lite Mode. Chrome engineers Addy Osmani and Scott Little had announced that Lite Mode would automatically lazy-load images and iframes for faster page loads. Lite Mode was switched off in Chrome 100, with Google citing a decrease in mobile data costs in many countries. Chrome periodically retrieves updates of two blacklists (one for phishing and one for malware) and warns users when they attempt to visit a site flagged as potentially harmful. This service is also made available for use by others via a free public API called the "Google Safe Browsing API". Chrome uses a process-allocation model to sandbox tabs. Following the principle of least privilege, each tab process cannot interact with critical memory functions (e.g. OS memory, user files) or other tab processes – similar to Microsoft's "Protected Mode" used by Internet Explorer 9 or greater.
The Sandbox Team is said to have "taken this existing process boundary and made it into a jail". This enforces a computer security model whereby there are two levels of multilevel security (user and sandbox) and the sandbox can only respond to communication requests initiated by the user. On Linux, sandboxing uses the seccomp mode. In January 2015, TorrentFreak reported that using Chrome when connected to the internet through a VPN can be a serious security issue due to the browser's support for WebRTC. On September 9, 2016, it was reported that starting with Chrome 56, users would be warned when visiting insecure HTTP websites, to encourage more sites to make the transition to HTTPS. On December 4, 2018, Google announced its Chrome 71 release with new security features, including a built-in ad-blocking system. In addition, Google also announced its plan to crack down on websites that make people involuntarily subscribe to mobile subscription plans. On September 2, 2020, with the release of Chrome 85, Google extended support for Secure DNS in Chrome for Android. DNS-over-HTTPS (DoH) was designed to improve safety and privacy while browsing the web. Under the update, Chrome automatically switches to DNS-over-HTTPS if the current DNS provider supports the feature. Since 2008, Chrome has been faulted for not including a master password to prevent casual access to a user's passwords. Chrome developers have indicated that a master password does not provide real security against determined hackers and have refused to implement one. Bugs filed on this issue have been marked "WontFix". As of February 2014, Google Chrome asks the user to enter their Windows account password before showing saved passwords. On Linux, Google Chrome/Chromium can store passwords in three ways: GNOME Keyring, KWallet, or plain text. Google Chrome/Chromium chooses which store to use automatically, based on the desktop environment in use. Passwords stored in GNOME Keyring or KWallet are encrypted on disk, and access to them is controlled by dedicated daemon software. Passwords stored in plain text are not encrypted. Because of this, when either GNOME Keyring or KWallet is in use, any unencrypted passwords that were stored previously are automatically moved into the encrypted store. Support for using GNOME Keyring and KWallet was added in version 6, but using these (when available) was not made the default mode until version 12. As of version 45, the Google Chrome password manager is no longer integrated with Keychain, since the interoperability goal is no longer possible. No security vulnerabilities in Chrome were exploited in the three years of Pwn2Own from 2009 to 2011. At Pwn2Own 2012, Chrome was defeated by a French team who used zero-day exploits in the version of Flash shipped with Chrome to take complete control of a fully patched 64-bit Windows 7 PC, using a booby-trapped website that overcame Chrome's sandboxing. Chrome was compromised twice at the 2012 CanSecWest Pwnium. Google's official response to the exploits was delivered by Jason Kersey, who congratulated the researchers, noting "We also believe that both submissions are works of art and deserve wider sharing and recognition." Fixes for these vulnerabilities were deployed within 10 hours of the submission. A significant number of security vulnerabilities in Chrome have occurred in the Adobe Flash Player. For example, the 2016 Pwn2Own successful attack on Chrome relied on four security vulnerabilities.
Two of the vulnerabilities were in Flash, one was in Chrome, and one was in the Windows kernel. In 2016, Google announced that it was planning to phase out Flash Player in Chrome, starting in version 53. The first phase of the plan was to disable Flash for ads and "background analytics", with the ultimate goal of disabling it completely by the end of the year, except on specific sites that Google deemed broken without it. Flash would then be re-enabled, with the exclusion of ads and background analytics, on a site-by-site basis. Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the United States Central Intelligence Agency, such as the ability to compromise web browsers (including Google Chrome). Google introduced download scanning protection in Chrome 17. In February 2018, Google introduced an ad-blocking feature based on recommendations from the Interactive Advertising Bureau. Sites that employ invasive ads are given a 30-day warning, after which their ads will be blocked. Consumer Reports recommended users install dedicated ad-blocking tools instead, which offer increased security against malware and tracking. The private browsing feature, called Incognito mode, prevents the browser from locally storing any history information, cookies, site data, or form inputs; downloaded files and bookmarks, however, are kept. In addition, user activity is not hidden from visited websites or the Internet service provider. Incognito mode is similar to the private browsing feature in other web browsers, and it does not prevent saving in all windows: "You can switch between an incognito window and any regular windows you have open. You'll only be in incognito mode when you're using the incognito window". The iOS version of Chrome also supports the optional ability to lock incognito tabs with Face ID, Touch ID, or the device's passcode. In 2022, Google began to implement this feature in Android versions of Chrome; it is now available for devices running Android 12 and above, assuming the hardware allows it. In 2024, Google agreed to destroy billions of records to settle a lawsuit claiming it secretly tracked the internet use of people who thought they were browsing privately in Incognito mode. In February 2012, Google announced that Chrome would implement the Do Not Track (DNT) standard to inform websites of the user's desire not to be tracked. The protocol was implemented in version 23. In line with the W3C's draft standard for DNT, it is turned off by default in Chrome. A multi-process architecture is implemented in Chrome: by default, a separate process is allocated to each site instance and plugin. This procedure is termed process isolation, and it raises security and stability by preventing tasks from interfering with each other. An attacker successfully gaining access to one application gains access to no others, and failure in one instance results in a "Sad Tab" screen of death, similar to the well-known Sad Mac, except that only one tab crashes instead of the whole application. This strategy exacts a fixed per-process cost up front but results in less memory bloat over time, as memory fragmentation is confined to each process. This architecture was later adopted in Safari and Firefox. Chrome includes a process management utility called Task Manager, which lets users see which sites and plugins are using the most memory, downloading the most bytes, or overusing the CPU, and provides the ability to terminate them.
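The crash containment that process isolation buys can be illustrated outside a browser. The following is a schematic sketch, not Chrome's implementation, using Node.js child processes from TypeScript: each "tab" lives in its own OS process, and a crash in one is observed by the parent rather than propagating to it. The worker script path and site URLs are hypothetical.

```typescript
import { fork } from "node:child_process";

// One OS process per "tab"; tab-worker.js (hypothetical) would load and
// render a single site, mirroring the process-per-site-instance model.
const sites = ["https://a.example", "https://b.example"];

for (const site of sites) {
  const tab = fork("./tab-worker.js", [site]);

  tab.on("exit", (code) => {
    // The failure is confined to this child process: the parent (the
    // "browser") keeps running and can show a "Sad Tab" for just this site.
    console.log(`tab for ${site} exited with code ${code} (sad tab)`);
  });
}
```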
Chrome version 23 brought users improved battery life on systems supporting Chrome's GPU-accelerated video decoding. The first production release, on December 11, 2008, marked the end of the initial beta test period and the beginning of production. Shortly thereafter, on January 8, 2009, Google announced an updated release system with three channels: Stable (corresponding to the traditional production releases), Beta, and Developer preview (also called the "Dev" channel). Where there had previously been only two channels, Beta and Developer, there were now three. Concurrently, all Developer channel users were moved to the Beta channel along with the promoted Developer release. Google explained that the Developer channel builds would now be less stable and polished than those from the initial Google Chrome beta period. Beta users could opt back into the Developer channel as desired. Each channel has its own release cycle and stability level. The Stable channel was updated roughly quarterly, with features and fixes that had passed "thorough" testing in the Beta channel. Beta was updated roughly monthly, with "stable and complete" features migrated from the Developer channel. The Developer channel was updated once or twice per week and was where ideas and features were first publicly exposed, "(and sometimes fail) and can be very unstable at times" (quoted remarks are from Google's policy announcements). On July 22, 2010, Google announced it would ramp up the speed at which it releases new stable versions; the release cycles were shortened from quarterly to six weeks for major Stable updates. Beta channel releases now come at roughly the same rate as Stable releases, though approximately one month in advance, while Dev channel releases appear roughly once or twice weekly, allowing time for basic release-critical testing. This faster release cycle also brought a fourth channel: the "Canary" channel, updated daily from a build produced at 09:00 UTC from the most stable of the last 40 revisions. The name refers to the practice of using canaries in coal mines: if a change "kills" Chrome Canary, it is blocked from migrating down to the Developer channel, at least until fixed in a subsequent Canary build. Canary is "the most bleeding-edge official version of Chrome and somewhat of a mix between Chrome dev and the Chromium snapshot builds". Canary releases run side by side with any other channel; Canary is not linked to any other Google Chrome installation and can therefore run different synchronization profiles, themes, and browser preferences. This ensures that fallback functionality remains even when some Canary updates contain release-breaking bugs. Canary does not natively include the option to be the default browser, although on Windows and macOS it can be set as default through the operating system's settings. Canary was Windows-only at first; a macOS version was released on May 3, 2011. The Chrome beta channel for Android was launched on January 10, 2013; like Canary, it runs side by side with the stable channel for Android. Chrome Dev for Android was launched on April 29, 2015. All Chrome channels are automatically distributed according to their respective release cycles. The mechanism differs by platform. On Windows, it uses Google Update, and auto-update can be controlled via Group Policy. Alternatively, users may download a standalone installer of a version of Chrome that does not auto-update. On macOS, it uses Google Update Service, and auto-update can be controlled via the macOS "defaults" system.
On Linux, Chrome lets the system's normal package management system supply the updates. This auto-updating behavior is a key difference from Chromium, the non-branded open-source browser that forms the core of Google Chrome. Because Chromium also serves as the pre-release development trunk for Chrome, its revisions are provided as source code, and buildable snapshots are produced continuously with each new commit, requiring users to manage their own browser updates. In March 2021, Google announced that starting with Chrome 94 in the third quarter of 2021, Google Chrome Stable releases would be made every four weeks, instead of every six weeks as they had been since 2010. Google also announced a new release channel for system administrators and browser embedders, with releases every eight weeks. Releases are identified by a four-part version number, e.g., 42.0.2311.90 (the Windows Stable release of April 14, 2015). The components are major.minor.build.patch. Chromium and Chrome release schedules are linked through Chromium (major) version Branch Point dates, published annually. The Branch Points precede the final Chrome Developer build (initial) release by 4 days (nearly always) and the Chrome Stable initial release by roughly 53 days. For example, the version 42 Branch Point was February 20, 2015; Developer builds stopped advancing at build 2311 with release 42.0.2311.4 on February 24, 4 days later, and the first Stable release, 42.0.2311.90, came on April 14, 2015, 53 days after the Branch Point. Chrome supports color management by using the system-provided ICC v2 and v4 support on macOS, and from version 22 supports ICC v2 profiles by default on other platforms. When Chrome is not connected to the Internet and an error page displaying "No internet" is shown, an "8-bit" Tyrannosaurus rex appears at the top. Pressing the space bar, clicking the dinosaur, or tapping it on touch devices makes the T-rex jump and start dashing across a cactus-ridden desert, revealing an Easter egg in the form of a platform game. The game is an infinite runner with no time limit; it progresses faster over time and periodically tints to a black background. A school or enterprise manager can disable the game.
Platforms
The current version of Chrome runs on:
As of April 2016, stable 32-bit and 64-bit builds are available for Windows, with only 64-bit stable builds available for Linux and macOS. 64-bit Windows builds became available in the developer channel and as canary builds on June 3, 2014, in the beta channel on July 30, 2014, and in the stable channel on August 26, 2014. 64-bit macOS builds became available as canary builds on November 7, 2013, in the beta channel on October 9, 2014, and in the stable channel on November 18, 2014. Starting with the release of version 89, Chrome is only supported on Intel x86 and AMD processors with the SSE3 instruction set. A beta version for Android 4.0 devices was launched on February 7, 2012, available for a limited number of countries from Google Play. Notable features included synchronization with desktop Chrome to provide the same bookmarks and view the same browser tabs, page pre-rendering, and hardware acceleration.
Chrome supports color management by using the system-provided ICC v2 and v4 support on macOS, and since version 22 supports ICC v2 profiles by default on other platforms. When Chrome is not connected to the Internet and displays its "No internet" error page, an "8-bit" Tyrannosaurus rex is shown at the top; pressing the space bar, clicking the dinosaur, or tapping it on touch devices makes the T-Rex jump and start dashing across a cactus-ridden desert, revealing an Easter egg in the form of a platform game. The game is an endless runner with no time limit; it speeds up as it progresses and periodically switches to a black background. A school or enterprise manager can disable the game. Platforms As of April 2016, stable 32-bit and 64-bit builds are available for Windows, with only 64-bit stable builds available for Linux and macOS. 64-bit Windows builds became available in the developer channel and as canary builds on June 3, 2014, in the beta channel on July 30, 2014, and in the stable channel on August 26, 2014. 64-bit macOS builds became available as canary builds on November 7, 2013, in the beta channel on October 9, 2014, and in the stable channel on November 18, 2014. Starting with version 89, Chrome is supported only on Intel x86 and AMD processors with the SSE3 instruction set. A beta version for Android 4.0 devices was launched on February 7, 2012, available for a limited number of countries from Google Play. Notable features included synchronization with desktop Chrome to provide the same bookmarks and view the same browser tabs, page pre-rendering, and hardware acceleration. It supported many of the latest HTML5 features and almost all of the Web Platform's features: GPU-accelerated canvas, CSS 3D Transforms, CSS animations, SVG, WebSocket (including binary messages), and Dedicated Workers; it had overflow scroll support, strong HTML5 video support, and new capabilities such as IndexedDB, Web Workers, Application Cache, the File APIs, date- and time-pickers, and parts of the Media Capture API, as well as mobile-oriented features such as Device Orientation and Geolocation. Mobile customizations included swipe-gesture tab switching, link preview (allowing zooming in on multiple links to ensure the desired one is clicked), and font size boosting to ensure readability regardless of the zoom level. Features missing from the mobile version included sandboxed tabs, Safe Browsing, apps or extensions, Adobe Flash (then and in the future), Native Client, and the ability to export user data, such as a list of opened tabs or browsing history, into portable local files. Development changes included remote debugging; part of the browser layer was implemented in Java, communicating with the rest of the Chromium and WebKit code through Java Native Bindings. The code of Chrome for Android is a fork of the Chromium project; upstreaming most new and modified code to Chromium and WebKit to resolve the fork is a priority. The April 17, 2012, update brought availability in 31 additional languages and in all countries where Google Play is available. A desktop version of a website can also be requested, as opposed to a mobile version, and Android users can add bookmarks to their home screens and decide which apps should handle links opened in Chrome. On June 27, 2012, Google Chrome for Android exited beta and became stable. Chrome 18.0.1026.311, released on September 26, 2012, was the first version of Chrome for Android to support mobile devices based on Intel x86. Starting from version 25, the Chrome version for Android is aligned with the desktop version, and new stable releases are usually available at the same time for both the Android and desktop versions. Google released a separate Chrome for Android beta channel on January 10, 2013, with version 25; this separate beta version is available in the Google Play Store and can run side by side with the stable release. Chrome is available on Apple's mobile iOS and iPadOS operating systems. Released in the Apple App Store on June 26, 2012, it supports the iPad, iPhone and formerly the iPod touch; the current version requires iOS 17.0 or iPadOS 17.0 or later. In accordance with Apple's requirements for browsers released through the App Store, this version of Chrome uses the iOS WebKit – Apple's own mobile rendering engine and components, developed for the Safari browser – and is therefore restricted from using Google's own V8 JavaScript engine. Chrome is the default web browser for the iOS and iPadOS Gmail application. In a review by Chitika, Chrome was noted as having 1.5% of the iOS web browser market as of July 18, 2012; in October 2013, Chrome had 3% of the iOS browser market. On Linux distributions, support for 32-bit Intel processors ended in March 2016, although Chromium is still supported. As of Chrome version 26, Linux installations of the browser may be updated only on systems that support GCC v4.6 and GTK v2.24 or later.
Deprecated systems thus include, for example, Debian 6 (GTK 2.20) and RHEL 6 (GTK 2.18). Support for Google Chrome on Windows XP and Windows Vista ended in April 2016. The last release of Google Chrome that can run on Windows XP and Vista is version 49.0.2623.112, released on April 7, 2016, and re-released on April 11, 2016. Support for Google Chrome on Windows 7 was originally supposed to end on July 15, 2021, but the date was moved back to January 15, 2022, due to the ongoing COVID-19 pandemic. Because enterprises took more time to migrate to Windows 10 or 11, the end-of-support date was pushed back again, to January 15, 2023, when support ended not only for Windows 7 but also for Windows 8 and 8.1; the last version to support these versions of Windows is Chrome 109. "Windows 8 mode", introduced in 2012 and since discontinued, was provided to the developer channel and enabled Windows 8 and 8.1 users to run Chrome with a full-screen, tablet-optimized interface, with access to snapping, sharing, and search functionalities. In October 2013, Windows 8 mode on the developer channel changed to use a desktop environment mimicking the interface of ChromeOS, with a dedicated windowing system and taskbar for web apps. The mode was removed in version 49, and users who upgraded to Windows 10 lost the feature. Google dropped support for Mac OS X 10.5 with the release of Chrome 22. Support for 32-bit versions of Chrome on OS X ended in November 2014 with the release of Chrome 39. Support for Mac OS X 10.6, OS X 10.7, and OS X 10.8 ended in April 2016 with the release of Chrome 50. Support for OS X 10.9 ended in April 2018 with the release of Chrome 66. Support for OS X 10.10 ended in January 2021 with the release of Chrome 88. Support for OS X 10.11 and macOS 10.12 ended in August 2022 with the release of Chrome 104. Support for macOS 10.13 and macOS 10.14 ended in September 2023 with the release of Chrome 117. Support for macOS 10.15 ended in September 2024 with the release of Chrome 129. Support for macOS 11 ended in August 2025 with the release of Chrome 139. Google Chrome is the basis of Google's ChromeOS operating system, which ships on specific hardware from Google's manufacturing partners. The user interface has a minimalist design resembling the Google Chrome browser. ChromeOS is aimed at users who spend most of their computer time on the Web; the only applications on the devices are a browser incorporating a media player and a file manager. Google announced ChromeOS on July 7, 2009. Reception Google Chrome was met with acclaim upon release. In 2008, Matthew Moore of The Daily Telegraph summarized the verdict of early reviewers: "Google Chrome is attractive, fast and has some impressive new features..." Initially, Microsoft reportedly played down the threat from Chrome and predicted that most people would embrace Internet Explorer 8. Opera Software said that "Chrome will strengthen the Web as the biggest application platform in the world". But by February 25, 2010, BusinessWeek had reported that "For the first time in years, energy and resources are being poured into browsers, the ubiquitous programs for accessing content on the Web. Credit for this trend – a boon to consumers – goes to two parties. The first is Google, whose big plans for the Chrome browser have shaken Microsoft out of its competitive torpor and forced the software giant to pay fresh attention to its own browser, Internet Explorer.
Microsoft all but ceased efforts to enhance IE after it triumphed in the last browser war, sending Netscape to its doom. Now it's back in gear." Mozilla said that Chrome's introduction into the web browser market came as "no real surprise", that "Chrome is not aimed at competing with Firefox", and that it would not affect Google's revenue relationship with Mozilla. PC World wrote: "Chrome's design bridges the gap between desktop and so-called 'cloud computing.' At the touch of a button, Chrome lets you make a desktop, Start menu, or QuickLaunch shortcut to any Web page or Web application, blurring the line between what's online and what's inside your PC. For example, I created a desktop shortcut for Google Maps. When you create a shortcut for a Web application, Chrome strips away all of the toolbars and tabs from the window, leaving you with something that feels much more like a desktop application than like a Web application or page." With its dominance in the web browser market, Google has been accused of using Chrome and Blink development to push new web standards that are proposed in-house by Google and subsequently implemented by its services first and foremost. These have led to performance disadvantages and compatibility issues with competing browsers, and in some cases to developers intentionally refusing to test their websites on any browser other than Chrome. Tom Warren of The Verge went as far as to compare Chrome to Internet Explorer 6, the default browser of Windows XP that was often targeted by competitors due to its similar ubiquity in the early 2000s. In 2021, computer scientist and lawyer Jonathan Mayer stated that Chrome has increasingly become an agent for Google rather than a user agent, as it is "the only major web browser that lacks meaningful privacy protections by default, shoves users toward linking activity with a Google Account, and implements invasive new advertising capabilities." Criticism A class-action lawsuit seeking $5 billion in damages was filed against Google in 2020, alleging that the company misled consumers into thinking it would not track them when using incognito mode, despite using various means to do so. In December 2023, a settlement was reportedly agreed to, and a proposed settlement agreement was filed in federal court on April 1, 2024. It stated that Google would delete billions of browsing-data records, revise disclosures about data collection in Incognito mode, and allow users to block third-party cookies in Incognito for five years; however, the agreement did not include monetary damages and remained subject to court approval. In June 2015, the Debian developer community discovered that Chromium 43 and Chrome 43 were programmed to download the Hotword Shared Module, which could enable the OK Google voice recognition extension, although it was "off" by default. This raised privacy concerns in the media. The module was present only in Chrome 43 and 44 and was removed in Chrome 45, released on September 1, 2015. Chrome sends details about its users and their activities to Google through both optional and non-optional user tracking mechanisms. Some of the tracking mechanisms can be enabled and disabled through the installation interface and through the browser's options dialog. Unofficial builds, such as SRWare Iron, seek to remove these features from the browser altogether. The RLZ library, which is used to measure the success of marketing promotions, is not included in the Chromium browser either.
In March 2010, Google devised a new method to collect installation statistics: the unique ID token included with Chrome is used only for the first connection that Google Update makes to its server. The optional suggestion service included in Google Chrome has been criticized because it provides the information typed into the Omnibox to the search provider before the user even hits return. This allows the search engine to offer URL suggestions, but also provides it with web-use information tied to an IP address. Chrome was previously able to suggest similar pages when a page could not be found; for this, in some cases, Google servers were contacted. The feature has since been removed. A 2019 review by Washington Post technology columnist Geoffrey A. Fowler found that in a typical week of browsing, Chrome allowed thousands more cookies to be stored than Mozilla Firefox. Fowler pointed out that, because of its advertising businesses, Google is a major producer of third-party cookies and has a financial interest in collecting user data despite the privacy controls it offers users; he recommended switching to Firefox, Apple Safari, or the Chromium-based Brave. In 2023, Google proposed a technology claimed to "hide the IP and traffic of its users" by routing Chrome traffic to Google servers; this drew criticism because all of that traffic would be readily available for Google to use. Also tied to Chrome is Google's advertising business, which, given the browser's vast market share, has sought to introduce features that protect this revenue stream, mainly a cookie-tracking alternative named Federated Learning of Cohorts (FLoC), which evolved into Topics, and the Manifest V3 API changes for extensions. In January 2021, Google stated it was making progress on developing privacy-friendly alternatives to the third-party cookies currently used by advertisers and companies to track browsing habits. Google then promised to phase out the use of third-party cookies in its web browser in 2022, implementing its FLoC technology instead. The announcement triggered antitrust concerns in multiple countries over abuse of the Chrome browser's market monopoly, with the U.K.'s Competition and Markets Authority and the European Commission both opening formal probes. The FLoC proposal also drew criticism from DuckDuckGo, Brave, and the Electronic Frontier Foundation for underestimating the ability of the API to track users online. On January 25, 2022, Google announced it had killed off development of its FLoC technologies and proposed the new Topics API to replace it. Topics is similarly intended to replace cookies, using a person's weekly web activity to determine a set of five interests. Topics are supposed to refresh every three weeks, changing the type of ads served to the user and not retaining the gathered data. Manifest V3 has faced criticism for changes to the WebRequest API used by ad-blocking and privacy extensions to block and modify network connections. The declarative replacement (declarativeNetRequest) uses rules processed by the browser itself, rather than sending all network traffic through the extension, which Google stated would improve performance. However, it is limited in the number of rules that may be set and in the types of expressions that may be used; a sketch of such a rule appears below.
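For illustration, a blocking rule in the declarativeNetRequest format is a static JSON object that the browser matches against requests itself; the extension code never sees the traffic. The sketch below (the filter pattern and file name are illustrative, not taken from any real extension) writes one such rule:

    import json

    # One static rule in Manifest V3's declarativeNetRequest format.
    # The browser evaluates it directly; unlike the old blocking
    # webRequest API, no network traffic flows through the extension.
    rule = {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||ads.example.com^",      # illustrative pattern
            "resourceTypes": ["script", "image"],
        },
    }

    # Rules ship as a static JSON file referenced from the manifest, so
    # updating a filter list normally requires shipping a new extension
    # version rather than fetching new rules at runtime.
    with open("rules.json", "w") as f:
        json.dump([rule], f, indent=2)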
Additionally, the prohibition of remotely hosted code restricts the ability of filter lists to be updated independently of the extension itself; because the Chrome Web Store review process has a variable length, filter lists may not be updated in a timely fashion. Google has been accused of using Manifest V3 to inhibit ad-blocking software because of its vested interest in the online advertising market. Google cited performance issues associated with WebRequest, as well as its use in malicious extensions. In June 2019, it announced that it would increase the aforementioned rule cap from 30,000 to 150,000 entries to help quell concerns about limitations to filtering rules. In 2021, the Electronic Frontier Foundation (EFF) issued a statement that Manifest V3 was "outright harmful to privacy efforts", as it would greatly limit the functionality of ad-blocking extensions. In December 2022, Google announced the transition would be paused "to address developer feedback and deliver better solutions to migration issues". In November 2023, Google announced it would resume the transition to Manifest V3; support for Manifest V2 extensions would be removed entirely from non-stable builds of Chrome beginning in June 2024. Google removed the extensions using MV2 from the Chrome Web Store in June 2025. The changes also affected other Chromium-based web browsers, including Microsoft Edge, Brave, Opera and Vivaldi. However, Microsoft announced that it would support MV2 ad blockers, and Brave announced that it would support MV2 extensions as long as possible, though the lack of its own extension website made this difficult (Brave uses the Chrome Web Store). Ultimately, Microsoft retained some MV2 extensions in its store, while Brave created a separate section allowing downloads of the MV2 versions of AdGuard, uBlock Origin (uBO), uMatrix, and NoScript. Mozilla stated that Manifest V3 support was being added to Mozilla Firefox's implementation of Chrome's extension API (WebExtensions) for compatibility reasons, but that its implementation would not contain the limitations that affect privacy and content-blocking extensions, and that its implementation of V2 would not be deprecated. Some extension developers have released separate editions of their extensions for MV3, some of which are significantly restricted or rewritten. For example, uBO was released as uBlock Origin Lite (uBOL), which has a static filter list – meaning a slower response to threats and the omission of some filters for smaller or regional websites – does not use so-called cosmetic filters (which hide elements on the page) by default, and generally contains significantly fewer filters. Restrictions were also introduced in the MV3 versions of Ghostery, Adblock Plus, AdGuard, and Stands. Researchers from Goethe University Frankfurt, examining request blocking in popular MV3 extensions, found that the providers of the tested extensions managed to bypass most of Google's restrictions: current versions of the tested MV3 extensions blocked as many or even more requests to trackers and ads than their MV2 counterparts (a quantitative study). However, the researchers also found that cosmetic filters were over 20% less effective in the tested sample (a visual study). In August 2024, a federal judge in Washington, D.C. ruled that Google maintained an illegal monopoly over search services. In November 2024, the US Department of Justice (DOJ) demanded that Google sell Chrome to stop Google from maintaining its monopoly in online search.
On August 12, 2025, the artificial intelligence company Perplexity AI made a $34.5 billion bid to buy the browser from Google. Perplexity stated that the sale could remedy antitrust litigation against Google, in which a judge was considering compelling the sale of Chrome. Usage Chrome overtook Firefox in worldwide usage in November 2011. As of January 2026, according to StatCounter, Google Chrome had a 71% worldwide usage share, and according to Cloudflare, 68%, making it the most widely used web browser. StatCounter, a web analytics company, reported that for the single day of Sunday, March 18, 2012, Chrome was the most used web browser in the world for the first time, securing 32.7% of global web browsing that day, with Internet Explorer following closely behind at 32.5%. From May 14–21, 2012, Google Chrome was for the first time responsible for more Internet traffic than Microsoft's Internet Explorer, which had long held its spot as the most used web browser in the world: according to StatCounter, 31.88% of web traffic over that sustained one-week period was generated by Chrome and 31.47% by Internet Explorer. Though Chrome had topped Internet Explorer for a single day's usage in the past, this was the first time it had led for a full week. At the 2012 Google I/O developers' conference, Google claimed that Chrome had 310 million active users, almost double the 2011 figure of 160 million. In June 2013, according to StatCounter, Chrome overtook Internet Explorer for the first time in the US. In August 2013, Chrome was used by 43% of internet users worldwide, according to a study by Statista, which also noted that in North America 36% of people used Chrome, the lowest share in the world. In December 2010, Google announced that, to make it easier for businesses to use Chrome, it would provide an official Chrome MSI package. For business use, it is helpful to have full-fledged MSI packages that can be customized via transform files (.mst), but the MSI provided with Chrome is only a very limited MSI wrapper fitted around the normal installer, and many businesses find that this arrangement does not meet their needs. The normally downloaded Chrome installer puts the browser in the user's local app data directory and provides invisible background updates, but the MSI package allows installation at the system level, giving system administrators control over the update process – something formerly possible only when Chrome was installed using Google Pack. Google also created group policy objects to fine-tune the behavior of Chrome in the business environment, for example by setting automatic update intervals, disabling auto-updates, and configuring a home page; a sketch of how such a policy is applied follows below. Until version 24, the software was known not to be ready for enterprise deployments with roaming profiles or Terminal Server/Citrix environments. In 2010, Google first started supporting Chrome in enterprise environments by providing an MSI wrapper around the Chrome installer. Google started providing group policy objects, with more added each release, and today there are more than 500 policies available to control Chrome's behavior in enterprise environments. In 2016, Google launched Chrome Browser Enterprise Support, a paid service enabling IT admins to access Google experts to support their browser deployment.
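As a sketch of how such an enterprise policy lands on a Windows machine (normally pushed by Group Policy from Google's policy templates; the homepage URL below is a placeholder, while HomepageLocation and HomepageIsNewTabPage are documented Chrome policy names), an administrator-privileged script could write the machine-wide registry values directly:

    import winreg  # Windows-only; requires administrator rights

    # Chrome reads machine-wide enterprise policies from this registry path.
    CHROME_POLICY_KEY = r"SOFTWARE\Policies\Google\Chrome"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, CHROME_POLICY_KEY) as key:
        # Force a homepage for all users of this machine.
        winreg.SetValueEx(key, "HomepageLocation", 0, winreg.REG_SZ,
                          "https://intranet.example.com")
        # 0 = the homepage is the URL above, not the New Tab page.
        winreg.SetValueEx(key, "HomepageIsNewTabPage", 0, winreg.REG_DWORD, 0)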
In 2019, Google launched Chrome Browser Cloud Management, a dashboard that gives business IT managers the ability to control content accessibility, app usage and browser extensions installed on their deployed computers. In September 2008, Google released a large portion of Chrome's source code as an open-source project called Chromium. This move enabled third-party developers to study the underlying source code and to help port the browser to the macOS and Linux operating systems. The Google-authored portion of Chromium is released under the permissive BSD license; other portions of the source code are subject to a variety of open-source licenses. Chromium is similar to Chrome but lacks built-in automatic updates, a built-in Flash player, and Google branding, and it has a blue-colored logo instead of the multicolored Google logo. Chromium also does not implement RLZ user tracking. Initially, the Google Chrome PDF viewer, PDFium, was excluded from Chromium, but it was made open source in May 2014. PDFium can be used to fill PDF forms. Developing for Chrome It is possible to develop applications, extensions, and themes for Chrome. They are zipped in a .crx file and contain a manifest.json file that specifies basic information (such as version, name, description, and privileges), along with other files for the user interface (icons, popups, etc.); a minimal manifest is sketched at the end of this article. Google has an official developer's guide on how to create, develop, and publish projects. Chrome has its own web store where users and developers can upload and download these applications and extensions. Impersonation by malware As with Microsoft Internet Explorer, the popularity of Google Chrome has led to the appearance of malware abusing its name. In late 2015, an adware replica of Chrome named "eFast" appeared; it would usurp the Google Chrome installation, hijack file type associations so that shortcuts for common file types and communication protocols linked to itself, and inject advertisements into web pages. Its similar-looking icon was intended to deceive users.
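Tying back to the "Developing for Chrome" paragraph above, the sketch below writes a minimal manifest.json of the kind packaged into a .crx file; all names and file references are placeholders rather than a real extension:

    import json

    # Minimal Chrome extension manifest. Every key shown is part of the
    # standard manifest format; the string values are placeholders.
    manifest = {
        "manifest_version": 3,               # current manifest format
        "name": "Example Extension",
        "version": "1.0",
        "description": "Illustrative manifest for a Chrome extension.",
        "permissions": ["storage"],          # the "privileges" mentioned above
        "action": {
            "default_popup": "popup.html",   # popup UI file
            "default_icon": "icon.png",
        },
    }

    # manifest.json sits at the root of the directory zipped into the .crx.
    with open("manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)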
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-172] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he had held Canadian citizenship since birth, as his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002, the same year Musk became an American citizen. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, to be received over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, in which he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa; Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Although both Elon and Errol had previously stated that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies", where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten so severely that he was hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital; Errol denied this and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself to program from the VIC-20 user manual. At age twelve, he sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, from which he graduated. He was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at the energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at the Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University but did not enroll, deciding instead to join the Internet boom of the 1990s. He applied for a job at Netscape but reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B; however, according to numerous former business associates and shareholders, Musk had said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors, housing the venture in a small rented office in Palo Alto. Speaking to Rolling Stone, Musk denounced the notion that they had started the company with funds borrowed from Errol Musk, though in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune and persuaded the board of directors to abandon plans for a merger with CitySearch; Musk's attempts to become CEO, however, were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and in its initial months of operation over 200,000 customers joined the service. Even so, the company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with its competitor Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to the resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000. Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk – the largest shareholder with 11.72% of shares – received $175.8 million (equivalent to $320,000,000 in 2025).
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from the Russian companies NPO Lavochkin and Kosmotras, then decided instead to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly bankrupted Musk, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, SpaceX successfully landed the first stage of a Falcon 9 on a land platform in 2015; later landings were achieved on autonomous spaceport drone ships, ocean-based recovery platforms. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 at $10 billion (equivalent to $12,000,000,000 in 2025). During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025); however, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning, both of whom played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008; with sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweet that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in the summer of 2020 and entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017; it operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, alleging that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla's directors settled the lawsuit in January 2020, leaving Musk as the sole remaining defendant; two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices embedded in the brain; such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions such as spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials – which have caused the deaths of some monkeys – have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act, and employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel at up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018; it used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled, while a tunnel beneath the Las Vegas Convention Center was completed in early 2021, and local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder. Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, however, Musk made a $43 billion offer to buy Twitter, and by the end of April he had successfully concluded a bid of approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase of the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk promised to step down as CEO after a Twitter poll, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks – which hinders visibility and is considered a form of shadow banning – or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport over such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board; since then, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings such as OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets, and the consequent fossil fuel usage, has drawn criticism. Musk's flight usage was tracked on social media through the ElonJet account. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content while framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News found that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories; since then, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement in a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it had been a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021, which featured executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the exclusion had been a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it comes to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk had said, "Yeah, seems odd that Tesla wasn't invited," and a month later said Biden's was "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found that he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or fascist Roman salute. He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together, then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." The gesture was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned it, and American public opinion was divided along partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack; neo-Nazi and white supremacist groups, meanwhile, celebrated it as a Nazi salute, and multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024 Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was never clear: in a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself", yet Trump said two days later that he had put Musk in charge of DOGE, and a federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce; he prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by that time, most of them of children; by November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day deadline as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk's June 5, 2025, post on X (formerly Twitter) alleging Trump had ties to sex offender Jeffrey Epstein: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts; Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars; he has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals regarding the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was in turn accused of spreading misinformation and amplifying the far right. He has also voiced support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms.

Legal affairs

In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years, though he was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023 a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation.

In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022; in February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter.

In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded, calling the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages.

Personal life

Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded; he then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked at the TED2022 conference in Vancouver about his experience growing up with Asperger's syndrome, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out."

Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression, dosing "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine, and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" An investigation by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concern from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict".

Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019; the following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players, but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings; he has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art", described the inclusion of the historical figure Yasuke in the game as offensive, and called the game "terrible". Ubisoft responded that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics.

Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family, having twins in 2004 followed by triplets in 2006. The couple divorced in 2008 and share custody of their children. The elder twin came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk.

Musk began dating English actress Talulah Riley in 2008; they married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017, having reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating; they have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations because it contained characters not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports in September 2021 that he and Grimes were "semi-separated", and in an interview with Time in December 2021 he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office.

In July 2022, The Wall Street Journal reported that Musk allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to their divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so.

Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting by The Wall Street Journal indicated that $1 million of these payments was structured as a loan.

In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions; the correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house; Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein replied, "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute."

Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 whether he had introduced Epstein to Mark Zuckerberg, he responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l]

Wealth

Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026 according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; by November 2020, around 75% of his wealth derived from Tesla stock, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021, followed by $400 billion in December 2024, $500 billion in October 2025, $600 billion in mid-December 2025, $700 billion later that month, and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk potentially worth $1 trillion was approved, which he is to receive over 10 years if he meets specific goals.

Public image

Although his ventures had been highly influential within their separate industries since the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, in contrast to other billionaires who prefer reclusiveness in order to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn criticism for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president".

Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018,[m] and in 2022 he was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021, and selected him as its "Person of the Year" for 2021. Time's then editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Safari_(web_browser)] | [TOKENS: 7268]
Safari (web browser)

Safari is a web browser developed by Apple. It is built into several of Apple's operating systems, including macOS, iOS, iPadOS, and visionOS, and uses Apple's open-source browser engine WebKit, which was derived from KHTML. Safari was introduced in an update to Mac OS X Jaguar in January 2003 and made the default web browser with the release of Mac OS X Panther that same year; at the time, it was the fastest browser on the Mac. It has been included with the iPhone since the first-generation iPhone in 2007. Between 2007 and 2012, Apple maintained a Windows version, but abandoned it due to low market share. In 2010, Safari 5 introduced a reader mode, extensions, and developer tools. Safari 11, released in 2017, added Intelligent Tracking Prevention, which uses artificial intelligence to block web tracking. Safari 13 added support for Apple Pay and authentication with FIDO2 security keys. Its user interface was redesigned in Safari 15, Safari 18, and Safari 26.

History and development

Netscape Navigator rapidly became the dominant Mac browser after its 1994 release, and eventually came bundled with Mac OS. In 1996, Microsoft released Internet Explorer for Mac (IE), and Apple released the Cyberdog internet suite, which included a web browser. In 1997, Apple shelved Cyberdog and reached a five-year agreement with Microsoft to make IE the default browser on the Mac, starting with Mac OS 8.1; Netscape continued to be preinstalled on all Macintosh systems. Microsoft continued to update IE for Mac, which was ported to Mac OS X DP4 in May 2000.

Apple introduced Safari on January 7, 2003, at Macworld San Francisco, where CEO Steve Jobs called it "a turbo browser for Mac OS X" and announced that it was based on WebKit, the company's internal fork of the KHTML browser engine. Apple created Safari for speed, calling it the fastest browser for the Mac; Jobs compared it to Internet Explorer, Netscape, and Chimera (later renamed Camino), showing that Safari was faster. Apple's second reason for creating Safari was to innovate: the company wanted to make the best browser ever. During development, several codenames were used, including "Freedom", "iBrowse", and "Alexander" (a reference to the conqueror Alexander the Great, and an homage to the Konqueror web browser). Apple released the first beta version, exclusively for Mac OS X, the same day; several official and unofficial beta versions followed until version 1.0 was released on June 23, 2003. On Mac OS X v10.3, Safari was pre-installed as the system's default browser rather than requiring a manual download, as had been the case with previous Mac OS X versions; Safari's predecessor, Internet Explorer for Mac, was included in 10.3 as an alternative.

In April 2005, engineer Dave Hyatt fixed several bugs in Safari; his experimental beta passed the Acid2 rendering test on April 27, 2005, making it the first browser to do so. Safari 2.0, released on April 29, 2005, was the sole browser Mac OS X 10.4 offered by default. Apple touted this version as delivering a 1.8x speed boost over version 1.2.4, but it did not yet feature the Acid2 bug fixes; those changes were initially unavailable to end users unless they compiled the WebKit source code themselves or ran one of the nightly automated builds available at OpenDarwin.
Version 2.0.2, released on October 31, 2005, included the Acid2 bug fixes. In June 2005, in response to criticism from KHTML developers over the lack of access to change logs, Apple moved the development source code and bug tracking of WebCore and JavaScriptCore to OpenDarwin, and open-sourced WebKit itself; the source code for non-renderer aspects of the browser, such as its GUI elements, remained proprietary. The final stable version of Safari 2, and the last version released exclusively with Mac OS X, was Safari 2.0.4, updated on January 10, 2006. It was only available within Mac OS X Update 10.4.4, and it delivered fixes to layout and CPU usage issues, among other improvements.

On January 9, 2007, at Macworld San Francisco, Jobs announced that Safari 3 had been ported to the newly introduced iPhone within iPhone OS (later called iOS); the mobile version was capable of displaying full, desktop-class websites. At WWDC 2007, Jobs announced Safari 3 for Mac OS X 10.5, Windows XP, and Windows Vista. He ran a benchmark based on the iBench browser test suite comparing the most popular Windows browsers, and claimed that Safari had the fastest performance. His claim was later examined by a third-party site called Web Performance, which measured HTTP load times. It verified that Safari 3 was indeed the fastest browser on the Windows platform in terms of initial data loading over the Internet, though it was only negligibly faster than Internet Explorer 7 and Mozilla Firefox at loading static content from the local cache. The initial Safari 3 beta for Windows, released on the same day as its announcement at WWDC 2007, contained several bugs and a zero-day exploit that allowed remote code execution. The issues were fixed by Apple three days later, on June 14, 2007, in version 3.0.1. On June 22, 2007, Apple released Safari 3.0.2 to address some bugs, performance problems, and other security issues. Safari 3.0.2 for Windows added handling for some fonts that were missing in the browser but already installed on Windows computers, such as Tahoma, Trebuchet MS, and others. The iPhone was released on June 29, 2007, with a version of Safari based on the same WebKit rendering engine as the desktop version but with a modified feature set better suited to a mobile device; the version number reported in its user agent string, 3.0, was in line with the contemporary desktop editions.

The first stable, non-beta version of Safari for Windows, Safari 3.1, was offered as a free download on March 18, 2008. In June 2008, Apple released version 3.1.2, which addressed a security vulnerability in the Windows version where visiting a malicious web site could force a download of executable files and execute them on the user's desktop. Safari 3.2, released on November 13, 2008, introduced anti-phishing features using Google Safe Browsing and Extended Validation Certificate support. The final version of Safari 3 was 3.2.3, released on May 12, 2009, with security improvements.

Safari 4 was released on June 8, 2009. It was the first version to completely pass the Acid3 rendering test, as well as the first to support HTML5. It incorporated the WebKit JavaScript engine SquirrelFish, which improved the browser's script interpretation performance by 29.9x; SquirrelFish later evolved into SquirrelFish Extreme, also marketed as Nitro, with 63.6x faster performance. A public beta of Safari 4 was released on February 24, 2009.
Safari 4 used Cover Flow to display History and Bookmarks, and featured Speculative Loading, which automatically pre-loaded the document information required to visit a particular website. Top Sites displayed up to 24 thumbnails of a user's most frequently visited sites on startup. The desktop version of Safari 4 included a redesign similar to that of the iPhone. The update also brought many developer tool improvements, including the Web Inspector, CSS element viewing, JavaScript debuggers and profilers, offline tables, database management, SQL support, and resource graphs, in addition to CSS retouching effects, CSS canvas, and HTML5 content support. On Windows, it replaced the initial Mac OS X-like interface with native Windows themes and native font rendering. Safari 4.0.1 was released for Mac on June 17, 2009, and fixed a bug affecting Faces in iPhoto '09.

Safari 4 in Mac OS X v10.6 "Snow Leopard" has built-in 64-bit support, which makes JavaScript load up to 50% faster. It also gained crash resistance: if a plug-in such as Flash Player crashed, other tabs and windows were not affected. Safari 4.0.4, the final version, released on November 11, 2009, for both Mac and Windows, further improved JavaScript performance.

Safari 5 was released on June 7, 2010, and featured a less distracting Reader view and 30% faster JavaScript performance. It incorporated numerous developer tool improvements, including better HTML5 interoperability and access to secure extensions. The progress bar was re-added in this version as well. Safari 5.0.1 enabled the Extensions preference pane by default, rather than requiring users to manually enable it in the Debug menu. Version 5.1.7 was the final version for Windows; while no longer available from Apple, this release can still be downloaded from the Wayback Machine and is still functional on Windows 11. Apple released Safari 4.1 concurrently with Safari 5, exclusively for Mac OS X Tiger; it included many of the features found in Safari 5, though it excluded Safari Reader and Safari Extensions. Apple released Safari 5.1 for both Windows and Mac on July 20, 2011, alongside Mac OS X 10.7 Lion; it was faster than Safari 5.0 and included the new Reading List feature. The company also released Safari 5.0.6 for Mac OS X 10.5 Leopard, though the new functions were excluded for Leopard users.

Several HTML5 features were provided in Safari 5: support for full-screen video, closed captions, geolocation, EventSource, and a now-obsolete early variant of the WebSocket protocol. This version also added full-text search and a new search engine option, Bing. Safari 5 supported Reader, which displays web pages in a continuous view without advertisements, and a smarter address field with Domain Name System (DNS) prefetching, which automatically found links and looked up addresses so that new web pages loaded faster. The Windows version received graphics acceleration as well. The blue inline progress bar returned to the address bar, in addition to the spinning bezel and loading indicator introduced in Safari 4. The Top Sites view gained a button to switch to Full History Search. Other features included the Extension Builder for developers of Safari Extensions and an improved inspector. Safari 5 supports Extensions, add-ons that customize the web browsing experience.
Extensions are built using web standards such as HTML5, CSS3, and JavaScript.

Safari 6.0 was previously referred to as Safari 5.2 until Apple changed the version number at WWDC 2012. The stable release of Safari 6 coincided with the release of OS X Mountain Lion on July 25, 2012, and was integrated with the OS; as a result, it was no longer available for download from Apple's website or any other source. Apple released Safari 6 via Software Update for users of OS X Lion. It was not released for OS X versions before Lion or for Windows; the company later quietly removed references and links to the Windows version of Safari 5, and Microsoft removed Safari from its browser-choice page.

On June 11, 2012, Apple released a developer preview of Safari 6.0 with a feature called iCloud Tabs, which syncs open tabs with any iOS or other OS X device running the latest software. It introduced new privacy features, including an "Ask websites not to track me" preference and the ability for websites to send notifications to OS X 10.8 Mountain Lion users, though it removed RSS support. Safari 6 gained the Share Sheets capability in OS X Mountain Lion, allowing pages to be shared via Add to Reading List, Add Bookmark, Email this Page, Message, Twitter, and Facebook; tabs with full-page previews were added too. The sixth major version also made some minor performance improvements and added support for -webkit-calc() in CSS. Various features were removed, including the Activity Window, the separate Download Window, direct support for RSS feeds in the URL field and bookmarks, and the separate search field as a toolbar configuration option; the latter was replaced by the smart search field, a combination of the address bar and the search field.

Safari 7 was announced at WWDC 2013 and brought a number of JavaScript performance improvements. It made use of Top Sites and Sidebar, Shared Links, and Power Saver, which paused unused plugins. Safari 7 for OS X Mavericks and Safari 6.1 for Lion and Mountain Lion were released along with OS X Mavericks at the special event on October 22, 2013.

Safari 8 was announced at WWDC 2014 and was released with OS X Yosemite. It included the WebGL JavaScript API, stronger privacy management, improved iCloud integration, and a redesigned interface. It was also faster and more efficient, with additional developer features including JavaScript Promises, CSS Shapes and compositing markup, IndexedDB, Encrypted Media Extensions, and the SPDY protocol.

Safari 9 was announced at WWDC 2015 and shipped with OS X El Capitan. New features included audio muting, more options for Safari Reader, and improved autofill. Not all features were available on the previous OS X Yosemite.

Safari 10 shipped with macOS Sierra and was released for OS X Yosemite and OS X El Capitan on September 20, 2016. It had redesigned Bookmarks and History views, in which double-clicking focuses on a particular folder. The update allowed Safari extensions to save pages directly to services such as Pocket. Software improvements included better AutoFill from the Contacts card, a Web Inspector Timelines tab, and Reader support for in-line sub-headlines, bylines, and publish dates. This version remembers and re-applies zoom levels per website, and legacy plug-ins were disabled by default in favor of HTML5 versions of websites.
Recently closed tabs can be reopened via the History menu, by holding the "+" button in the tab bar, or with Shift-Command-T. When a link opens in a new tab, it is now possible to hit the back button or swipe to close it and return to the original tab. Debugging is now supported in the Web Inspector. Safari 10 also included several security updates, including fixes for six WebKit vulnerabilities and issues related to Reader and Tabs. The first version of Safari 10 was released on September 20, 2016, and the last version (10.1.2) on July 19, 2017.

Safari 11 was released on September 19, 2017, for OS X El Capitan and macOS Sierra, ahead of macOS High Sierra's release, and was included with High Sierra. Safari 11 included several new features, most notably Intelligent Tracking Prevention, which aimed to prevent cross-site tracking by placing limitations on cookies and other website data. Intelligent Tracking Prevention allowed first-party cookies to continue tracking browsing history, though with time limits; for example, first-party cookies from ad-tech companies such as Google/Alphabet Inc. were set to expire 24 hours after the visit.

Safari 12 was released for macOS Mojave on September 24, 2018, and was also made available for macOS Sierra and macOS High Sierra on September 17, 2018. It included several new features, such as icons in tabs, Automatic Strong Passwords, and Intelligent Tracking Prevention 2.0. Safari 12.0.1 was released on October 30, 2018, within macOS Mojave 10.14.1, and Safari 12.0.2 on December 5, 2018, with macOS 10.14.2. Support for developer-signed classic Safari Extensions was dropped, and this version was the last to support the official Extensions Gallery. Apple encouraged extension authors to switch to Safari App Extensions, which triggered negative feedback from the community.

Safari 13 was announced at WWDC 2019 on June 3, 2019. It included several new features, such as prompting users to change weak passwords, FIDO2 USB security key authentication support, Sign in with Apple support, Apple Pay on the Web support, and increased speed and security. Safari 13 was released on September 20, 2019, for macOS Mojave and macOS High Sierra, and later shipped with macOS Catalina.

In June 2020, it was announced that macOS Big Sur would include Safari 14, which, according to Apple, is more than 50% faster than Google Chrome. Safari 14 introduced new privacy features, including Privacy Report, which shows blocked content and privacy information on web pages; users also receive a monthly report on the trackers Safari has blocked. Extensions can be enabled or disabled on a site-by-site basis. Safari 14 introduced partial support for the WebExtension API used in Google Chrome, Microsoft Edge, Firefox, and Opera, making it easier for developers to port their extensions from those browsers to Safari. Support for Adobe Flash Player was dropped, three months ahead of its end-of-life, and a built-in translation service allowed translation of a page into another language. Safari 14 was released as a standalone update for macOS Catalina and Mojave on September 16, 2020. It added Ecosia as a supported search engine.

Safari 15 was released for iOS 15, iPadOS 15, macOS Big Sur, and macOS Catalina on September 20, 2021, and later shipped with macOS Monterey. It featured a redesigned interface, in which the tab bar blended into the page background, and tab groups.
It also brought a new start page and extension support to the iOS and iPadOS editions. Starting with this update, Safari versions covered iOS and iPadOS as well as macOS, ending separately versioned iOS updates.

Safari 16 was released for iOS 16, macOS Monterey, and macOS Big Sur on September 12, 2022, and later shipped with macOS Ventura and iPadOS 16. It added support for non-animated AVIF and contained several bug fixes and feature polish. Safari 16 also included shared tab groups, vertical tab support, synchronization of website settings between devices connected to the same iCloud account, the ability to add backgrounds to the start page, new languages for built-in translation, built-in image translation, and new options for editing strong passwords. iOS 16.4 also introduced Web Push notifications.

Safari 17 was released in September 2023 with iOS 17, iPadOS 17, and macOS Sonoma. It includes a feature named "Profiles", which allows users to separate their browsing sessions for different use cases: every profile has a separate favorites bar, navigation history, extensions, tab groups, and cookies. Like iOS 16.4, Safari 17 introduced web apps that can be added to the Dock; cookies are copied into web apps, so users who are already logged in within Safari stay logged into the web app. Safari can also now read pages aloud via a new option in the navigation bar menu. New privacy features include private browsing that locks when not in use, tracking-free URLs, and Private Relay location reporting based on country and time zone rather than a more specific position. Safari was also adapted to the Vision Pro with a new spatial UI, and Apple redesigned the Develop menu for web developers. Safari 17 added AV1 decoding on devices with hardware support for it.

Safari 18 was released in September 2024 with iOS 18, iPadOS 18, macOS Sequoia, and, for the first time, visionOS 2. Like Safari 15, it redesigned the interface, though less extensively, mainly in the start page and reader mode (now simply called Reader). A new AI-powered feature, "Highlights", automatically detects relevant information on a page and highlights it while browsing. Other new features include faster loading times and a redesigned unified menu, now present on all versions of the browser; previously it was exclusive to iOS and iPadOS, along with the compact mode on macOS.

Safari 26 was released in September 2025 with iOS 26, iPadOS 26, macOS Tahoe, and visionOS 26. The browser was revamped using the Liquid Glass design language and has an optional compact layout on iOS; the compact layout on iPadOS and macOS was removed. Like the operating systems, Safari's version number is now based on the calendar year following its initial release.

Safari Technology Preview was first released alongside OS X El Capitan 10.11.4. Safari Technology Preview releases include the latest version of WebKit, incorporating Web technologies planned for future stable releases of Safari, so that developers and users can install the Technology Preview on a Mac, test those features, and provide feedback.

The Safari Developer Program was a program dedicated to in-browser extension and HTML developers. It allowed members to write and distribute extensions for Safari through the Safari Extensions Gallery.
It was initially free until it was incorporated into the Apple Developer Program at WWDC 2015, which costs $99 a year; the charges prompted frustration from developers.[citation needed] Within OS X El Capitan, Apple implemented Secure Extension Distribution to further improve security, automatically updating all extensions in the Safari Extensions Gallery.

Features

Until Safari 6.0, Safari included a built-in web feed aggregator that supported the RSS and Atom standards. Current features include Private Browsing (a mode in which the browser retains no record of information about the user's web activity), the ability to archive web content in WebArchive format, the ability to email complete web pages directly from a browser menu, the ability to search bookmarks, and the ability to share tabs between all Mac and iOS devices running appropriate versions of software via an iCloud account. In Safari's early years, it pioneered several HTML5 features that are now standard, such as the Canvas API; in 2015, however, Safari was criticized for failing to keep pace with some modern web technologies.

In September 2017, Apple announced that it would use artificial intelligence (AI) to reduce the ability of advertisers to track Safari users as they browse the web. Cookies used for tracking would be allowed for 24 hours, then disabled, unless the AI system judged that the user wanted to keep the cookie. Major advertising groups objected, saying the change would reduce the free services supported by advertising, while other experts praised it.

Apple used a remotely updated plug-in blacklist to prevent potentially dangerous or vulnerable plug-ins from running in Safari; initially, Flash and Java content were blocked on some early versions of Safari. Since Safari 12, support for NPAPI plug-ins (except for Flash) has been completely dropped, and Safari 14 finally dropped support for Adobe Flash Player as well.

Beginning in 2018, Apple made technical changes to Safari's content-blocking functionality that prompted backlash from users and developers of ad-blocking extensions, who said the changes made it impossible to offer the level of user protection found in other browsers. Internally, the update limited the number of blocking rules that third-party extensions could apply, preventing the full implementation of community-developed blocklists. In response, several developers of popular ad and tracking blockers announced that their products were being discontinued, as they were now incompatible with Safari's newly limited content-blocking features. Beginning with Safari 13, popular extensions such as uBlock Origin no longer work with Safari.

Safari can sync bookmarks, history, the reading list, and tabs through iCloud. This happens by default if a user's Mac, iPhone, or iPad is logged in to iCloud, but syncing can be disabled in the Settings app (on iOS and iPadOS) or System Settings (on Mac).[citation needed] iCloud Tabs lets users see a list of their other devices' open tabs that have not been added to a tab group. On iOS and iPadOS, these iCloud Tabs are shown below the grid of open tabs; on the Mac, they are shown at the bottom of the Tab Overview, or in an optional iCloud Tabs toolbar item.[citation needed] Safari 15 added tab groups.
These tab groups, and the tabs they contain, can be synced across devices; when a tab is opened in a tab group on one device, it is added to that tab group on all devices, without needing to be opened manually through iCloud Tabs.[citation needed] macOS Ventura added Shared Tab Groups, which can be shared through iMessage: new and closed tabs sync for all participants, and a small thumbnail with a user's profile picture is visible on the tab they are currently viewing. Safari also supports the Handoff feature, which allows users to continue where they left off on another device.

The Safari sidebar was introduced in Safari 8 as a way to access Bookmarks, the Reading List, and Shared Links. The sidebar got its biggest update in Safari 16, which added support for vertical tabs, letting users see their tabs arranged vertically in addition to the horizontal tab view in the top toolbar.

Visual Look Up allows users to quickly learn more about landmarks, works of art, and more by selecting an image or a photo. Users can also easily lift the subject of an image from Safari, remove its background, and paste it into other apps like Messages and Notes. Live Text enables users to interact with text within any image or paused video, allowing functionality such as copying, translating, or looking up text without leaving Safari. Safari's translation feature allows for instant translation of entire web pages and supports text in images and paused video, broadening its multilingual capabilities. The Quick Note feature lets users capture thoughts or jot down ideas while browsing, directly within Safari; this functionality integrates with the Notes app, providing a streamlined way to save and manage notes.

Safari supports Passkeys, a password-less authentication method that provides end-to-end encryption for login credentials. Passkeys sync securely across devices via iCloud Keychain and offer protection against phishing and data leaks. Highlights, a feature powered by machine learning, automatically surfaces contextual information like summaries, quick links, and related content based on web activity, making it easier to discover additional content without leaving the page. Distraction Control lets users hide specific elements on a webpage that might be visually disruptive, allowing for a cleaner browsing experience and improved focus on the content. Safari also removes tracking parameters from shared URLs, preventing third-party sites from tracking the user's navigation behavior; this feature is enabled by default in Messages, Mail, and Private Browsing mode.

Safari utilizes the WebKit engine to render HTML and execute JavaScript. In 2005, Safari 2.0 became the first browser to pass the Acid2 rendering test, verifying its adherence to CSS and HTML standards. Modern versions support advanced web technologies, including WebAssembly, WebGPU, and the WebExtensions API, the latter of which allows for cross-browser extension compatibility.

Architecture

On macOS, Safari is a Cocoa application. It uses Apple's WebKit for rendering web pages and running JavaScript. WebKit consists of WebCore (based on Konqueror's KHTML engine) and JavaScriptCore (originally based on KDE's JavaScript engine, named KJS). Like KHTML and KJS, WebCore and JavaScriptCore are free software and released under the terms of the GNU Lesser General Public License. Some Apple improvements to the KHTML code were merged back into the Konqueror project.
Apple has also released some additional code under an open-source two-clause BSD-like license. The version of Safari included in Mac OS X v10.6 (and later versions) is compiled for 64-bit architecture; Apple claimed that running Safari in 64-bit mode would increase rendering speeds by up to 50%.

WebKit2 is a multiprocess API for WebKit, in which the web content is handled by a separate process from the application using WebKit. Apple announced WebKit2 in April 2010. Safari for OS X switched to the WebKit2 API with version 5.1, and Safari for iOS switched with iOS 8.

Safari supports WebAssembly (Wasm), including some of its extensions, but the WasmGC extension was off by default beginning with Safari Technology Preview 167 in 2023. Because many programming languages that target the web need WasmGC in practice, its absence in Safari blocked some cross-platform software. It was still off by default in Safari 18, but was enabled by the developers on August 8, 2024, catching Safari up with other web browsers such as Google's Chrome.

Other platforms

Safari for iPhone was released along with the original iPhone. It was well received at the time of release, with news outlets calling it "far superior" to other mobile browsers of the day. Safari has also been available for iPadOS since its split from the main iOS operating system. With the release of iPadOS 13, Safari for iPad's user agent was changed to present itself to websites as Safari for Mac, showing the desktop version of websites except in the miniature Slide Over multitasking view. Apple improved multitouch compatibility for desktop websites through several tweaks to the WebKit engine, for example with heuristics to determine whether to translate a tap into a hover or a click. The iPadOS version also gained a download manager, support for Media Source Extensions to allow users to watch Netflix in Safari, and support for custom keyboard shortcuts in web apps like Gmail, which override Safari's own keyboard shortcuts. Support for external webcams on websites was added later. The browser has continued to receive updates with new releases of iOS, such as the addition of browsing profiles for different use cases in iOS 17 and a locked private browsing feature.

iOS 15 added support for third-party browser extensions, which can be downloaded and installed through corresponding apps via the App Store; available extensions included VPNs and content blockers. Universal extensions that also work with the Mac version of Safari can be created via the WebExtensions API.

A Safari version for visionOS was released with the launch of the Apple Vision Pro headset in 2024, with features specific to the platform, such as moving browser windows around in virtual space. The Verge called it the headset's "killer app" at launch, due to its versatility and potential for web experiences.

Safari for Windows was introduced along with the 3.0 version for Mac at Apple's WWDC conference in 2007, in an effort to increase overall Safari market share. It supported Windows XP and Vista at launch. Wired, in a review, praised its speed but criticized bugs at launch. After Safari's release, Apple Software Update, an updater program bundled with QuickTime and iTunes for Windows, automatically selected Safari for installation as a "recommended" program. This was criticized by John Lilly, then CEO of Mozilla, who said it "borders on malware distribution practices".
By late 2008, Apple Software Update stopped installing new software by default, though it still offered Safari in its list of available programs (with its checkbox unticked). Safari for Windows was discontinued after version 5.1.7, released in 2012.

Market share

In 2009, Safari had a market share of 3.85%. Its share grew steadily over the following years: 5.56% (2010), 7.41% (2011), 10.07% (2012), and 11.77% (2013). In 2014, it caught up with Firefox with a market share of 14.20%. In 2015, Safari became the second most-used web browser worldwide after Google Chrome, with a market share of 13.01%; from 2016 to 2020, its share was 14.02%, 14.86%, 14.69%, 17.68%, and 19.25%, respectively. As of November 2021, Google Chrome continued to be the most popular browser, with Safari (19.22%) following in second place. In May 2022, according to StatCounter, Safari dropped to third among desktop browsers after being overtaken by Microsoft Edge, with 9.61% of desktop computers worldwide; one year later, Safari retook second place.

Criticism

Software security firm Sophos detailed how Snow Leopard and Windows users were left unsupported by the Safari 6 release at the time, with 121 vulnerabilities left unpatched on those platforms. Since then, Snow Leopard has had only three minor version releases of Safari (the most recent in September 2013), and Windows has had none. While no official word has been released by Apple, the indication is that these are the final versions available for those operating systems, and both retain significant security issues.

Apple has been criticized for anticompetitive practices related to Safari on iOS. Before iOS 14 (2020), users could not change their default browser, so links always opened in Safari. App Store rules still require all third-party iOS browsers to use Safari's WebKit browser engine, inheriting its limitations. Apple's stated motivation for this browser-engine restriction was to increase security, an argument disputed by the UK's Competition and Markets Authority. The European Union's Digital Markets Act regulation, passed in 2022, requires Apple to allow alternative browser engines; in response, Google and Mozilla began porting their browser engines to iOS.

In November 2023, during the US search engine antitrust trial against Google, an economics professor at the University of Chicago revealed that Google pays Apple 36% of all search advertising revenue generated when users access Google through the Safari browser. These payments reached $20 billion in 2022, according to Eddy Cue, Apple's senior vice president of services. Both Apple and Google argued that disclosing the specific terms of their search default agreement would harm their competitive positions, but the court ruled that the information was relevant to the antitrust case and ordered its disclosure. The revelation has raised concerns about Google's dominance of the search engine market and the potential anticompetitive effects of its agreements with Apple.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-71405-1] | [TOKENS: 380]
Book sources

This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number (ISBN); spaces and dashes in the ISBN do not matter. It links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its ISBN.

Online text: Google Books and other retail sources may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online.

Non-English book sources: if the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias; they are more likely to have sources appropriate for that language.

Find other editions: the WorldCat xISBN tool for finding other editions is no longer available; however, there is often a "view all editions" link on the results page from an ISBN search, and Google Books often lists other editions of a book, and related books, under the "about this book" link. You can also convert between 10- and 13-digit ISBNs with conversion tools, as illustrated below.
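Because the 10-to-13-digit conversion is purely mechanical (prefix "978", keep the first nine digits, and recompute the check digit), it can be sketched in a few lines. The following Python function is our own illustration, not a tool offered by this page; it is exercised with the ISBN from this page's URL.

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 to its ISBN-13 form: prefix 978, recompute the check digit."""
    core = isbn10.replace("-", "").replace(" ", "")[:9]  # drop the old check digit
    digits = "978" + core
    # EAN-13 check digit: weights alternate 1, 3 over the first 12 digits,
    # and the check digit brings the weighted sum up to a multiple of 10.
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(digits))
    return digits + str((10 - total % 10) % 10)

print(isbn10_to_isbn13("0-521-71405-2"))  # -> 9780521714051
```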
========================================
[SOURCE: https://en.wikipedia.org/wiki/Modulo_operation] | [TOKENS: 3223]
Modulo

In computing and mathematics, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another, the latter being called the modulus of the operation. Given two positive numbers a and n, a modulo n (often abbreviated as a mod n) is the remainder of the Euclidean division of a by n, where a is the dividend and n is the divisor. For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" evaluates to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0. Although typically performed with a and n both being integers, many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation with modulus n is 0 to n − 1, and a mod 1 is always 0. When exactly one of a or n is negative, the basic definition breaks down, and programming languages differ in how these values are defined.

Variants of the definition

In mathematics, the result of the modulo operation is an equivalence class, and any member of the class may be chosen as representative; however, the usual representative is the least positive residue, the smallest non-negative integer that belongs to that class (i.e., the remainder of the Euclidean division). However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware. In nearly all computing systems, the quotient q and the remainder r of a divided by n ≠ 0 satisfy the conditions

$q \in \mathbb{Z}, \qquad a = nq + r, \qquad |r| < |n|. \qquad (1)$

This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive, and that choice determines which of the two consecutive quotients must be used to satisfy equation (1). In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs of a or n.[a] Standard Pascal and ALGOL 68, for example, give a positive remainder (or 0) even for negative divisors, and some programming languages, such as C90, leave it to the implementation when either of n or a is negative (see the table under § In programming languages for details). Some systems leave a modulo 0 undefined, though others define it as a.

Many implementations use truncated division, for which the quotient is defined by

$q = \operatorname{trunc}\left(\frac{a}{n}\right),$

where $\operatorname{trunc}$ is the integral part function (rounding toward zero), i.e. the truncation to zero significant digits. Thus according to equation (1), the remainder has the same sign as the dividend a, and can take 2|n| − 1 values:

$r = a - n \operatorname{trunc}\left(\frac{a}{n}\right).$

Donald Knuth promotes floored division, for which the quotient is defined by

$q = \left\lfloor \frac{a}{n} \right\rfloor,$

where $\lfloor\,\rfloor$ is the floor function (rounding down). Thus according to equation (1), the remainder has the same sign as the divisor n:

$r = a - n \left\lfloor \frac{a}{n} \right\rfloor.$

Raymond T. Boute promotes Euclidean division, for which the non-negative remainder $r \in \{0, 1, 2, \ldots\}$ is defined by

$r := a - nq \quad \text{such that} \quad 0 \le r < |n|.$

Under this definition, we can say the following about the quotient q:

$q = \frac{a-r}{n} = \operatorname{sgn}(n) \cdot \frac{a-r}{|n|} = \operatorname{sgn}(n) \cdot \left( \frac{a}{|n|} - \frac{r}{|n|} \right) = \operatorname{sgn}(n) \cdot \left\lfloor \frac{a}{|n|} \right\rfloor \in \mathbb{Z},$

where sgn is the sign function, $\lfloor\,\rfloor$ is the floor function (rounding down), and $\frac{a}{|n|}, \frac{r}{|n|} \in \mathbb{Q}$ are rational numbers. Equivalently, one may instead define the quotient $q \in \mathbb{Z}$ as follows:

$q := \operatorname{sgn}(n) \left\lfloor \frac{a}{|n|} \right\rfloor = \begin{cases} \left\lfloor \frac{a}{n} \right\rfloor & \text{if } n > 0, \\ \left\lceil \frac{a}{n} \right\rceil & \text{if } n < 0, \end{cases}$

where $\lceil\,\rceil$ is the ceiling function (rounding up). Thus according to equation (1), the remainder r is non-negative:

$r = a - nq = a - |n| \left\lfloor \frac{a}{|n|} \right\rfloor.$

Common Lisp and IEEE 754 use rounded division, for which the quotient is defined by

$q = \operatorname{round}\left(\frac{a}{n}\right),$

where round is the round function (rounding half to even). Thus according to equation (1), the remainder falls between $-\frac{n}{2}$ and $\frac{n}{2}$, and its sign depends on which side of zero it falls to be within these boundaries:

$r = a - n \operatorname{round}\left(\frac{a}{n}\right).$

Common Lisp also uses ceiling division, for which the quotient is defined by

$q = \left\lceil \frac{a}{n} \right\rceil,$

where $\lceil\,\rceil$ is the ceiling function (rounding up). Thus according to equation (1), the remainder has the opposite sign of that of the divisor:

$r = a - n \left\lceil \frac{a}{n} \right\rceil.$

If both the dividend and divisor are positive, then the truncated, floored, and Euclidean definitions agree. If the dividend is positive and the divisor is negative, then the truncated and Euclidean definitions agree. If the dividend is negative and the divisor is positive, then the floored and Euclidean definitions agree. If both the dividend and divisor are negative, then the truncated and floored definitions agree. However, truncated division satisfies the identity $(-a)/b = -(a/b) = a/(-b)$.

Notation

Some calculators have a mod() function button, and many programming languages have a similar function, expressed as mod(a, n), for example. Some also support expressions that use "%", "mod", or "Mod" as a modulo or remainder operator, such as a % n or a mod n. For environments lacking a similar function, any of the three definitions above can be used.
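As an illustration, the conventions above can be compared side by side. The following is a minimal Python sketch of our own (the function names are ours); Python's built-in round() rounds half to even, matching the rounded-division convention, and its own % operator is the floored variant.

```python
import math

def r_trunc(a, n):   # truncated: sign follows the dividend (C, Java)
    return a - n * math.trunc(a / n)

def r_floor(a, n):   # floored: sign follows the divisor (Knuth; Python's %)
    return a - n * math.floor(a / n)

def r_euclid(a, n):  # Euclidean: remainder always non-negative (Boute)
    return a - abs(n) * math.floor(a / abs(n))

def r_round(a, n):   # rounded: remainder in [-n/2, n/2] (Common Lisp, IEEE 754)
    return a - n * round(a / n)

def r_ceil(a, n):    # ceiling: sign opposite the divisor (Common Lisp)
    return a - n * math.ceil(a / n)

for a, n in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    print(f"{a:3} mod {n:2}:", r_trunc(a, n), r_floor(a, n),
          r_euclid(a, n), r_round(a, n), r_ceil(a, n))
```

For (a, n) = (−7, 3), for example, the sketch prints −1, 2, 2, −1, and −1 under the truncated, floored, Euclidean, rounded, and ceiling conventions respectively.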
Common pitfalls

When the result of a modulo operation has the sign of the dividend (truncated definition), it can lead to surprising mistakes. For example, to test if an integer is odd, one might be inclined to test whether the remainder by 2 is equal to 1, as in isOdd(n) := (n % 2 == 1). But in a language where modulo has the sign of the dividend, that is incorrect, because when n (the dividend) is negative and odd, n mod 2 returns −1, and the function returns false. One correct alternative is to test that the remainder is not 0 (because a remainder of 0 is the same regardless of the signs), as in isOdd(n) := (n % 2 != 0); another is to use binary arithmetic, as in isOdd(n) := (n & 1 == 1). Both are illustrated in the first sketch below.

Performance issues

Modulo operations might be implemented such that a division with a remainder is calculated each time. For special cases, on some hardware, faster alternatives exist. For example, the modulo of powers of 2 can alternatively be expressed as a bitwise AND operation (assuming x is a positive integer, or using a non-truncating definition): x % 2^n == x & (2^n − 1). Examples include x % 2 == x & 1 and x % 8 == x & 7. In devices and software that implement bitwise operations more efficiently than modulo, these alternative forms can result in faster calculations. Compiler optimizations may recognize expressions of the form expression % constant where constant is a power of two and automatically implement them as expression & (constant − 1), allowing the programmer to write clearer code without compromising performance. This simple optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend (including C), unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas expression & (constant − 1) will always be positive. For these languages, the equivalence x % 2^n == (x < 0 ? x | ~(2^n − 1) : x & (2^n − 1)) has to be used instead, expressed using bitwise OR, NOT and AND operations. Optimizations for general constant-modulus operations also exist, by calculating the division first using the constant-divisor optimization.

Properties (identities)

Some modulo operations can be factored or expanded similarly to other mathematical operations; for example, (a · b) mod n = [(a mod n) · (b mod n)] mod n. This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange (see the second sketch below). The properties involving multiplication, division, and exponentiation generally require that a and n are integers.

In programming languages

Many computer systems provide a divmod functionality, which produces the quotient and the remainder at the same time. Examples include the x86 architecture's IDIV instruction, the C programming language's div() function, and Python's divmod() function.
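A short Python sketch of the odd-test pitfall and of the power-of-two trick. Python's own % is floored (the result takes the sign of the divisor), so truncated semantics are emulated with a helper; all names here are illustrative, and the sign-aware function at the end mirrors the C-style form quoted above.

    def trunc_mod(a, n):
        """Remainder under truncated division: takes the sign of the dividend a."""
        return a - n * int(a / n)   # int() truncates toward zero; exact while a/n fits in a float

    def is_odd_buggy(n):
        # Wrong under truncated semantics: trunc_mod(-3, 2) is -1, not 1.
        return trunc_mod(n, 2) == 1

    def is_odd(n):
        # Correct: a remainder of 0 is 0 under every sign convention.
        return trunc_mod(n, 2) != 0

    def is_odd_bitwise(n):
        # Also correct: Python's & treats negative integers as two's complement.
        return n & 1 == 1

    assert not is_odd_buggy(-3)     # the surprise: -3 is reported as even
    assert is_odd(-3) and is_odd_bitwise(-3)

    # Modulo by a power of two as a bitwise AND (non-negative x, or a
    # non-truncating definition such as Python's own %):
    x, n = 1234, 5
    assert x % 2**n == x & (2**n - 1)   # e.g. x % 2 == x & 1, x % 8 == x & 7

    # Sign-aware form for truncated languages such as C, as quoted above.
    # Note that the OR branch assumes a non-zero remainder: at exact negative
    # multiples of 2**n it yields -2**n rather than 0.
    def c_style_mod_pow2(x, n):
        mask = 2**n - 1
        return (x | ~mask) if x < 0 else (x & mask)

    assert c_style_mod_pow2(-9, 3) == trunc_mod(-9, 8) == -1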
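The identities themselves are not reproduced above, so as an illustration, here are two standard ones in Python, together with the modular exponentiation they enable; the Diffie–Hellman parameters below are toy values chosen only for the demonstration.

    # Distributivity of mod over + and *: intermediate results may be reduced early.
    a, b, n = 123456789, 987654321, 97
    assert (a + b) % n == ((a % n) + (b % n)) % n
    assert (a * b) % n == ((a % n) * (b % n)) % n

    # The same identity underlies fast modular exponentiation, the workhorse of
    # Diffie-Hellman: reduce mod p after every squaring and multiplication so the
    # numbers stay small. Python exposes this as the three-argument pow().
    p, g = 23, 5                        # toy public parameters (a real p is far larger)
    alice_secret, bob_secret = 6, 15    # private values, never transmitted
    A = pow(g, alice_secret, p)         # Alice sends A
    B = pow(g, bob_secret, p)           # Bob sends B
    assert pow(B, alice_secret, p) == pow(A, bob_secret, p)  # shared key agrees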
Generalizations

Sometimes it is useful for the result of a modulo n to lie not between 0 and n − 1, but between some number d and d + n − 1. In that case, d is called an offset and d = 1 is particularly common. There does not seem to be a standard notation for this operation, so let us tentatively use a mod_d n. We thus have the following definition: x = a mod_d n just in case d ≤ x ≤ d + n − 1 and x mod n = a mod n. Clearly, the usual modulo operation corresponds to zero offset: a mod n = a mod_0 n. The operation of modulo with offset is related to the floor function as follows:

    a mod_d n = a − n ⌊(a − d)/n⌋.

To see this, let x = a − n ⌊(a − d)/n⌋. We first show that x mod n = a mod n. It is in general true that (a + bn) mod n = a mod n for all integers b; thus, this is true also in the particular case when b = −⌊(a − d)/n⌋; but that means that x mod n = (a − n ⌊(a − d)/n⌋) mod n = a mod n, which is what we wanted to prove. It remains to be shown that d ≤ x ≤ d + n − 1. Let k and r be the integers such that a − d = kn + r with 0 ≤ r ≤ n − 1 (see Euclidean division). Then ⌊(a − d)/n⌋ = k, thus x = a − n ⌊(a − d)/n⌋ = a − nk = d + r. Now take 0 ≤ r ≤ n − 1 and add d to both sides, obtaining d ≤ d + r ≤ d + n − 1. But we've seen that x = d + r, so we are done.

The modulo with offset a mod_d n is implemented in Mathematica as Mod[a, n, d].

Despite the mathematical elegance of Knuth's floored division and Euclidean division, it is generally much more common to find a truncated division-based modulo in programming languages. Leijen provides algorithms for calculating each of the two divisions given a truncated integer division; a reconstruction appears in the sketch below. For both cases, the remainder can be calculated independently of the quotient, but not vice versa.
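Leijen's algorithms are not reproduced in the text, so the following Python sketch is a reconstruction consistent with the definitions above: it derives the floored and Euclidean quotient–remainder pairs from a truncated divmod primitive (the kind of result C's div() or x86 IDIV supplies), and implements modulo with offset from its floor formula. Function names are illustrative.

    def trunc_divmod(a, n):
        """The truncated primitive assumed given (cf. C99 / and %, x86 IDIV)."""
        q = abs(a) // abs(n)
        if (a < 0) != (n < 0):
            q = -q
        return q, a - n * q

    def floor_divmod(a, n):
        """Floored division derived from truncated division."""
        q, r = trunc_divmod(a, n)
        if r != 0 and (r < 0) != (n < 0):   # remainder and divisor signs differ
            q, r = q - 1, r + n
        return q, r

    def euclid_divmod(a, n):
        """Euclidean division derived from truncated division: 0 <= r < |n|."""
        q, r = trunc_divmod(a, n)
        if r < 0:
            q, r = (q + 1, r - n) if n < 0 else (q - 1, r + n)
        return q, r

    def mod_offset(a, n, d):
        """a mod_d n: the representative of a's residue class lying in [d, d + n - 1]."""
        return a - n * ((a - d) // n)       # Python's // is the floor in the formula

    assert floor_divmod(-7, 3) == (-3, 2)
    assert euclid_divmod(-7, -3) == (3, 2)
    assert mod_offset(7, 3, 1) == 1         # 7 = 2*3 + 1, and 1 already lies in [1, 3]
    assert mod_offset(6, 3, 1) == 3         # the residue 0 is represented by 3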
========================================
[SOURCE: https://en.wikipedia.org/wiki/Apple_Music] | [TOKENS: 4853]
Apple Music

Apple Music is a music streaming service launched by Apple in June 2015. The service is available in 167 countries and currently has over 100 million songs. Apple Music runs the free internet radio stations Apple Music 1, Apple Music Hits, Apple Music Country, Apple Música Uno, Apple Music Club, and Apple Music Chill, which are broadcast live 24 hours a day.

Originally strictly a music service, Apple Music began expanding into video in 2016. Then-executive Jimmy Iovine stated that the intention for the service was to become a "cultural platform", and Apple reportedly wants the service to be a "one-stop shop for pop culture". The company is actively investing heavily in the production and purchasing of video content, both in terms of music videos and concert footage that support music releases, as well as web series and feature films. Apple Music gained popularity rapidly after its launch, reaching 10 million subscribers in six months. As of May 2023, the most streamed song of all time on Apple Music is "Shape of You" by Ed Sheeran, with more than 930 million plays worldwide.

Features

Apple Music subscribers can create a profile to share their music with friends and follow other users to view the music they are listening to on a regular basis. Apple Music's use of iCloud, which matches a user's songs to those found on the service, allows users to combine their iTunes music library with their Apple Music library and listen to their music all in one place. In 2018, with the release of iOS 12, Apple Music added the ability to search for a song by its lyrics.

Users can also view their most played songs, artists, and albums of the entire year through a feature called Apple Music Replay. This feature tracks listening times down to the minute, giving users accurate information on how much they may have listened to a specific song, artist, or album. The feature also tells the user which genres they listened to throughout the year, ordered from most listened-to to least listened-to. Apple Music Replay also provides an interactive recap: it plays a generated animation summarizing the user's activity over the past year, along with a milestone section that shows specific goals they reached.

History

Before Apple Music, the company's iPod and iTunes were known for having "revolutionized digital music." Former Apple CEO Steve Jobs was known to be opposed to the idea of music subscription services. When Apple bought audio equipment maker Beats Electronics in 2014, Apple gained ownership of Beats' own service Beats Music, and made Beats Music CEO Ian Rogers responsible for the iTunes Radio service. Business Insider later reported that Apple was planning to merge the two services. Apple also hired the noted New Zealand-born British radio DJ Zane Lowe to serve as a music curator.

Shortly before Apple Music was released, singer-songwriter Taylor Swift wrote an open letter publicly criticizing Apple's decision to not reimburse artists during a user's free trial period and announced that she would be holding back her album 1989 from the service. She said the policy was "unfair", as "Apple Music will not be paying writers, producers, or artists for those months". UK independent record label Beggars Group also criticized the trial period, saying it struggled "to see why rights owners and artists should bear this aspect of Apple's customer acquisition costs".
The day after Swift's letter, Apple Senior Vice President of Internet Software and Services Eddy Cue announced on Twitter that Apple had changed its policy, and that Apple Music "will pay artists for streaming, even during customers' free trial period". On Twitter, Swift wrote "After the events of this week, I've decided to put 1989 on Apple Music... And happily so". She concluded saying it was "the first time it's felt right in my gut to stream my album".

In negotiations with record labels for the new service, Apple allegedly attempted to encourage labels to pull their content from the free, ad-supported tiers of competing services such as Spotify and Amazon Music in order to drive adoption of Apple Music, and offered an incentive to Universal Music Group to pull its content from YouTube. The United States Department of Justice and Federal Trade Commission launched an investigation into this alleged cartel in May 2015.

After rumors and anticipation, Sony Music CEO Doug Morris confirmed on June 7, 2015, that Apple had plans to announce a music streaming service, saying "It's happening tomorrow," with the launch later in the month. Morris emphasized several times that, from a financial perspective, he preferred paid streaming over ad-supported streaming. Furthermore, Morris said he expected the service to be the "tipping point" to accelerate the growth of streaming, arguing that Apple has "$178 billion dollars in the bank. And they have 800 million credit cards in iTunes.", as opposed to Spotify, which "never really advertised because it's never been profitable". Morris further argued that "Apple will promote this like crazy and I think that will have a halo effect on the streaming business. A rising tide will lift all boats. It's the beginning of an amazing moment for our industry."

The announcement happened as the signature "one more thing..." reveal at Apple's conference. Hip hop artist Drake appeared onstage at the announcement event to elaborate on how he used the Connect platform, and Apple subsequently emphasized how "Independent music can share their music on Connect, too", in contrast to the iTunes Store, where small, independent artists were finding it difficult to participate.

Apple Music launched on June 30, 2015, in 100 countries. Initially, new users received a three-month free trial subscription, which changed to a monthly fee after three months; the trial now lasts one month. A family plan allows six users to share a subscription at a reduced rate. Apple originally sought to enter the market at a lower price point for the service, but the music industry rejected the plan. The service debuted as an updated Music app on the iOS 8.4 update. Apple TV and Android device support was planned for a "fall" 2015 launch. A previously unreleased song by Pharrell Williams, entitled "Freedom", was used in promotional material and announced as an exclusive release on the launch of the service. The "History of Sound" advert for the launch of the Apple Music service was soundtracked by the tune "There Is No Light" by Wildbirds & Peacedrums, from their 2009 album The Snake.

Upon its launch, Beats Music subscriptions and playlists were migrated to Apple Music, and the service was discontinued. In October 2015, Drake and Apple signed a deal to release the music video for "Hotline Bling" exclusively on Apple Music. In December, Apple released an exclusive Taylor Swift tour documentary, called The 1989 World Tour, on Apple Music.
In February 2016, The Hollywood Reporter reported that Dr. Dre would be starring in and executive producing a "dark semi-autobiographical drama" called Vital Signs. The production was described as "Apple's first scripted television series". Recode reported a few days later that the announcement of Dr. Dre's production was an effort to "extend Apple Music" in promotional ways rather than Apple actively exploring original television content. Citing Apple's deals with Drake and Swift in October and December 2015, respectively, the report referenced a Twitter user describing Apple's efforts as "content marketing".

In November 2015, Apple launched the Android version of Apple Music, touted by reporters as Apple's first "real" or "user-centric" Android app. The app was updated in April 2017 to match the service's iOS 10 design.

In January 2016, Fortune reported that, six months after launching, Apple Music had reached 10 million paying subscribers, reaching in six months a customer base that had taken competing music streaming service Spotify six years to build. This customer base increased to 11 million subscribers in February, 13 million in April, 15 million in June, 17 million in September, 20 million in December, 27 million in June 2017, 36 million in February 2018, 38 million in March 2018 (just five weeks after the previous milestone), 40 million in April 2018, 50 million as of May 2018, 56 million as of December 2018, and 60 million as of June 2019.

In February 2016, Music Business Worldwide reported that, with Apple Music having launched in Turkey and Taiwan in the previous week, the service was available in 113 countries. The publication further wrote that those countries included 59 markets where competing service Spotify was not available. In August 2016, Apple Music was launched in Israel and South Korea.

In May 2016, a student membership was announced, which discounted the regular price of a subscription by 50%. The student plan was initially only available to eligible students in the United States, United Kingdom, Germany, Denmark, Ireland, Australia, and New Zealand, but was expanded to an additional 25 countries in November 2016.

In July 2016, Apple bought Carpool Karaoke from The Late Late Show with James Corden, with Variety writing that Apple was planning to distribute the series through Apple Music. Apple's adaptation of the series was originally supposed to premiere in April 2017, but was delayed without explanation. The series instead premiered on August 8, 2017. Apple added personalized music playlists to the service, with the September 2016 launch of "My New Music Mix" and the June 2017 launch of "My Chill Mix".

In January 2017, The Wall Street Journal reported that Apple was exploring original video content, including its own television series and movies. A few days later, Apple Music executive Jimmy Iovine confirmed the reports about the move towards video, and in February, he announced that Apple Music would launch its first two television-style series in 2017, with the aim of turning Apple Music into a "cultural platform". In March 2017, The Information reported that Apple had recently hired several people to help evolve its video platform, including YouTube product manager Shiva Rajaraman. In April 2017, it was announced that Apple Music would be the exclusive home of Sean Combs's documentary Can't Stop, Won't Stop: A Bad Boy Story, which premiered June 25.
On the same day, Bloomberg Businessweek reported that artist Will.i.am would make a reality show for Apple Music, in an effort to turn the service into a "one-stop shop for pop culture". The reality show was later revealed to be called Planet of the Apps, and would focus on the "app economy". The series cast 100 developers and premiered on June 6, 2017.

In June 2017, Apple hired two television executives from Sony, Jamie Erlicht and Zack Van Amburg. The two had jointly held the title of "President" at Sony, and had helped develop shows including Breaking Bad and Shark Tank. The hiring was noted by the media as another significant effort by Apple to expand into original video productions. In early December 2017, Apple hired Michelle Lee, a programming veteran, as a creative executive of Apple's original video team, and a few days later, also hired Philip Matthys and Jennifer Wang Grazier from Hulu and Legendary Entertainment, respectively.

On November 30, 2018, Apple added support for Apple Music on Amazon Echo speakers, after the service had previously only been accessible on Apple's own HomePod speakers. On December 13, 2018, Apple discontinued Apple Music's "Connect" feature in favor of its redesigned approach to artist profiles and the ability for users to share their music and playlists with friends and followers, introduced in iOS 11.

On September 5, 2019, Apple released the first version of an Apple Music web player in beta. The web player gives users full access to their music libraries along with features similar to those of the Apple Music app, though it initially lacked key features expected to be added later. A Windows 11 app was released in beta in January 2023, to replace the aging iTunes for Windows.

On November 15, 2019, Apple released a new Apple Music feature called Apple Music Replay, a year-end playlist showing users their favorite tracks of the entire year, similar to Spotify Wrapped. On November 20, 2019, Apple introduced Apple Music for Business, offering customized playlists for partnered retailers, while also revealing that the platform's catalog now hosted over 60 million songs.

In 2020, Apple Music sealed deals with Universal Music Group, Sony Music and Warner Music Group for further promotion and streaming of songs from artists on their labels. On April 21, 2020, Apple announced that Apple Music would be expanding to an additional 52 countries around the world, bringing the total to 167 worldwide.

On October 19, 2020, Apple launched Apple Music TV via Apple Music and the Apple TV app in the United States. Apple Music TV is a free, continuous 24/7 livestream focused on music videos, akin to the early days of MTV. Apple Music TV planned to premiere new music videos every Friday at 12 PM ET, as well as to air occasional artist and themed takeovers, Apple Music original documentaries and films, live events and shows, and chart countdowns. The service launched with a countdown of the 100 most streamed songs in the US of all time on Apple Music. From October 30, 2020, Apple Music was included in the Apple One bundle along with several other Apple services such as News, iCloud, Arcade, and TV Plus.

On May 17, 2021, Apple announced that Apple Music would begin offering lossless audio via the ALAC codec in June 2021, along with music mixed in Dolby Atmos, all at no additional cost to Apple Music subscribers.
In July 2021, the Android version of the app also received support for lossless and spatial audio with Dolby Atmos, though the features were not mentioned in the update release notes. By December 28, 2021, Apple Music had upgraded its entire catalogue of 90 million tracks to lossless audio.

On October 19, 2021, Apple introduced the discounted Apple Music Voice plan at $4.99/month, which limited subscribers to accessing the service's music library and playback features only through Siri. The plan was discontinued on November 1, 2023, with no explanation. On October 27, 2021, Sony announced that Apple Music would become available on the PlayStation 5.

On May 17, 2022, Apple Music announced Apple Music Live, a new concert series kicking off with Harry Styles live from New York on May 20. On June 24, 2022, Apple Music increased the price of its student plan, available to eligible college students, from $4.99 to $5.99 per month in the U.S. It represented the first price increase for any plan since Apple Music's launch in the country. Similar price increases occurred to student plans in the U.K. and Canada at the same time.

On September 22, 2022, Apple announced that it had signed a multi-year deal with the NFL to have Apple Music become the main sponsor of the Super Bowl halftime show beginning with Super Bowl LVII. On October 12, 2022, Apple Music became available for the Xbox One and Xbox Series X/S. On October 24, 2022, Apple announced it was to increase pricing of standard Apple Music subscriptions (along with Apple TV+ and Apple One) in many regions. The Individual plan increased $1 to $10.99/month, the Family plan increased $2 to $16.99/month, and the Annual plan for individuals increased $10 to $109/year.

With the release of iOS 16.2 on December 13, 2022, Apple introduced the "Apple Music Sing" karaoke feature, which added real-time lyrics and, on supported devices, a slider that allows the volume of vocals to be adjusted independently of a track's instrumentals on supported songs.

Apple partnered with Bharti Airtel to provide its music and video streaming services to the telecom company's premium clients in India from 2024 at no cost. In February 2024, Djay added support for DJing with tracks from Apple Music on MacOS, Windows, Android, iPad, iPhone, and Vision Pro. In March 2025, Apple announced "DJ with Apple Music", expanding compatibility with DJ software to Rekordbox, Serato, and Engine DJ and adding support for some stand-alone DJ hardware from AlphaTheta (OMNIS-DUO and XDJ-AZ), Denon, and Numark. However, stem mixing functionality is disabled with streaming music.

As of August 2025, Apple Music had 94 million subscribers. On August 27, 2025, it was announced that Taylor Swift was the most-favorited artist on the platform.

Apple Music Classical

On August 13, 2021, Apple acquired classical music streaming service Primephonic, and announced that it would become the basis for a new Apple Music app dedicated to classical music, planned to launch in 2022. In March 2023, Apple released Apple Music Classical on iOS. Apple Music Classical is included with an Apple Music subscription; it focuses on classical music, whereas Apple Music covers a wide range of genres. The Android app was released on May 30, 2023, and the iPad app on November 16, 2023. On September 5, 2023, Apple acquired the classical music label BIS Records.
Reception

Apple Music received mixed reviews at launch. Among the criticism, reviewers wrote that the user interface was "not intuitive" and an "embarrassing and confusing mess". They also wrote about battery life problems. However, the service was praised for its smart functions. Christina Warren of Mashable noted the emphasis on human curation in Apple Music, pointing out the various human-curated radio stations and the accuracy of the curated playlists recommended to users in the "For You" section. The author concluded saying "[The] For You section alone has made me excited about music for the first time in a long time." Sam Machkovech of Ars Technica wrote that Apple's emphasis on unsigned artist participation in the Connect feature could be an effort to restore the company's former reputation as a "tastemaker" in the mid-2000s.

Apple Music's major redesign in iOS 10 received more positive reviews. Caitlin McGarry of Macworld praised Apple for having "cleaned up the clutter, reconsidered the navigation tools, put your library front and center, and added algorithmically created playlists to rival Spotify's." She noted bigger fonts and large amounts of white space, and welcomed changes to various functionalities, concluding with the statement that "Apple Music's redesign is a huge improvement over its previous incarnation, and a clear sign that Apple is listening to its customers". However, another Macworld editor, Oscar Raymundo, criticized the new design, writing that "Apple Music in iOS 10 is not as elegant or intuitive as Apple promised. The music service added more needless options, key actions like repeat got buried, and the For You section leaves a lot to be desired". Jordan Novet of VentureBeat wrote positively about the changes, stating "Apple has improved the overall design, as well as the experience".

In December 2017, singer-songwriter Neil Young released a new archive as part of his Neil Young Archives project and criticized Apple for the audio quality offered by its Apple Music streaming service, stating: "Apple Music controls the audio quality that is served to the masses and chooses to not make high quality available, reducing audio quality to between 5 percent and 20 percent of the master I made in the studio in all cases. So, the people hear 5 percent to 20 percent of what I created. ... Apple not offering a top-quality tier has led labels to stop making quality products available to the masses". Young's claim, however, did not stand up to technical scrutiny, with Apple delivering an industry-standard high-quality bitrate of 256 kbit/s AAC, slightly edging out in quality Spotify, which uses a 320 kbit/s Ogg Vorbis bitrate.

The implementation of iCloud Music Library caused significant issues for users. There were reports of music libraries being impacted by issues such as tracks moved to other albums, album art not matching the music, duplicate artists and songs, missing tracks, and synchronization problems. Mashable wrote that "Apple has not yet publicly acknowledged the problem or responded to our request for comment". iCloud Music Library has also been reported to delete music from users' local storage, though other publications have disputed this as caused by user error or another application. Additionally, the feature was reported to have replaced uploaded content with a version locked with digital rights management.
In July 2016, Apple switched the matching technology to incorporate features identical to iTunes Match, specifically the use of "audio fingerprints" to scan sound data. The new technology also removed DRM from downloaded matched songs.

In August 2016, Frank Ocean released Blonde exclusively on Apple Music. The decision was made by Ocean independently, without Def Jam Recordings, his former label, being a part of the deal. The exclusive deal reportedly "ignited a music streaming war". The move followed in the footsteps of other artists, including Adele, Coldplay, Future, Drake, Beyoncé, Rihanna, and Kanye West, who released albums on exclusive terms with music streaming competitors of leading service Spotify. Jonathan Prince, Spotify's head of communications, told The Verge that "We're not really in the business of paying for exclusives, because we think they're bad for artists and they're bad for fans. Artists want as many fans as possible to hear their music, and fans want to be able to hear whatever they're excited about or interested in — exclusives get in the way of that for both sides. Of course, we understand that short promotional exclusives are common and we don't have an absolute policy against them, but we definitely think the best practice for everybody is wide release". After a two-week period, Blonde was released on Spotify.

Ocean's independent move to Apple Music exclusivity caused "a major fight in the music industry", and Universal Music Group reportedly banned the practice of exclusive releases for its signed artists. Soon after, several major record labels followed Universal, marking a significant change in the industry. According to unnamed label executives, Spotify had also introduced a new policy under which the service would not give an album the same level of promotion once it arrived on Spotify after other services, including not being prominently featured in playlists. Rolling Stone wrote in October 2016 that "if you wanted to keep up with new albums by Beyoncé, Drake, Frank Ocean, and Kanye West, among many others, you would have had to subscribe to not one but two streaming services", adding, "But over the past few months, a backlash has developed against this new reality". Lady Gaga told Apple Music's Beats 1 radio, "I told my label that if they signed those contracts with Apple Music and Tidal, I'd leak all my own new music". In May 2017, Apple Music executive Jimmy Iovine told Music Business Worldwide, "We tried it. We'll still do some stuff with the occasional artist. The labels don't seem to like it and ultimately it's their content."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Spotify] | [TOKENS: 14333]
Spotify

Spotify is a Swedish audio streaming and media service provider founded in April 2006 by Daniel Ek and Martin Lorentzon. As of December 2025, it was one of the largest providers of music streaming services, with over 751 million monthly active users, comprising 290 million paying subscribers. Spotify is listed on the New York Stock Exchange in the form of American depositary receipts.

Spotify offers DRM-protected audio content, including over 100 million songs and over 7 million podcast titles, from record labels and media companies. Operating as a freemium service, the basic features are free with advertisements and limited control, while additional features, such as offline listening and commercial-free listening, are offered via paid subscriptions. Users can search for music based on artist, album, or genre, and can create, edit, and share playlists. It offers some social media features such as messaging, creating profiles, following friends, shared playlists, and creating listening parties called "Jams".

As of December 2022, Spotify is available in most of Europe, as well as Africa, the Americas, Asia, and Oceania, with availability in a total of 184 markets. Its users and subscribers are based largely in the US and Europe, jointly accounting for around 53% of users and 67% of revenue. It has no presence in mainland China, where the market is dominated by QQ Music. The service is available on most devices, including Windows, macOS, and Linux computers, iOS and Android smartphones and tablets, smart home devices such as the Amazon Echo and Google Nest lines of products, and digital media players like Roku. As of December 2023, Spotify was the 47th most-visited website in the world, with 24.78% of its traffic coming from the United States, followed by Brazil with 6.51%, according to data provided by Semrush. As of 2022, Spotify is the shirt sponsor of Spanish football club FC Barcelona, with artists like Drake, Travis Scott, and Ed Sheeran collaborating with the club to place their artist logos on the shirt, sometimes for album promotions.

Unlike physical or download sales, which pay artists a fixed price per song or album sold, Spotify pays royalties based on the number of artist streams as a proportion of total songs streamed. It distributes approximately 70% of its total revenue to rights holders (often record labels), who then pay artists based on individual agreements. While certain musicians laud the service for offering a lawful option to combat piracy and for remunerating artists each time their music is played, others have voiced objections to Spotify's royalty structure and its effect on record sales.

History

Spotify was founded in 2006 in Stockholm, Sweden, by Daniel Ek, former chief technology officer of Stardoll, and Martin Lorentzon, co-founder of Tradedoubler. Ek first had the idea for Spotify around 2002, when peer-to-peer music service Napster shut down and another illegal site, Kazaa, became popular. Ek said he "realized that you can never legislate away from piracy. Laws can definitely help, but it doesn't take away the problem. The only way to solve the problem was to create a service that was better than piracy and at the same time compensates the music industry – that gave us Spotify." According to Ek, the company's title was initially misheard from a name shouted by Lorentzon. Later they conceived a portmanteau of "spot" and "identify".
Ek's initial pitch to Lorentzon was not related to music, but rather a way for streaming content such as video, digital films, images or music to drive advertising revenue. In February 2009, Spotify opened public registration for the free service tier in the United Kingdom. Registrations surged following the release of the mobile service, leading Spotify to halt registration for the free service in September, returning the UK to an invitation-only policy.

Spotify launched in the United States in July 2011, and offered a six-month, ad-supported trial period, during which new users could listen to an unlimited amount of music for free. In January 2012, the free trial periods began to expire, limiting users to ten hours of streaming each month and five plays per song. On PC streaming, a structure similar to the one used today allowed the listener to play songs freely, but with ads every 4–7 songs, depending on listening duration. Later that same year, in March, Spotify removed all limits on the free service tier indefinitely, including on mobile devices.

In April 2016, Ek and Lorentzon wrote an open letter to Swedish politicians, demanding action in three areas that they claimed hindered the company's ability to recruit top talent as Spotify grew: access to flexible housing, better education in the programming and development fields, and stock options. Ek and Lorentzon wrote that to continue competing in a global economy, politicians needed to respond with new policies, or thousands of Spotify jobs would be moved from Sweden to the United States.

In February 2017, Spotify announced the expansion of its United States operations in Lower Manhattan, New York City, at 4 World Trade Center, adding approximately 1,000 new jobs and retaining 832 existing positions. The company's US headquarters are in New York City's Flatiron District. On 14 November 2018, the company announced 13 new markets in the MENA region, including the creation of a new Arabic hub and several playlists.

In October 2015, "Thinking Out Loud" by Ed Sheeran became the first song to pass 500 million streams. A month later, Spotify announced that "Lean On" by Major Lazer and DJ Snake featuring MØ was its most-streamed song of all time, with over 525 million streams worldwide. In April 2016, Rihanna overtook Justin Bieber to become the biggest artist on Spotify, with 31.3 million monthly active listeners. In May 2016, Rihanna was overtaken by Drake with 31.85 million monthly listeners. In December 2016, Drake's just-under 36 million monthly listeners were overtaken by the Weeknd's 36.068 million. Later that same month, Drake's song "One Dance" became the first song to hit one billion streams on Spotify. Upon its release in August 2017, the single "Look What You Made Me Do" by Taylor Swift earned over eight million streams within 24 hours, breaking the record for the most single-day streams for a track. On 19 June 2018, XXXTentacion's hit single "Sad!" broke Swift's single-day streaming record, amassing 10.4 million streams the day after he was fatally shot in Florida.

In March 2011, Spotify announced a customer base of 1 million paying subscribers across Europe, and by September 2011, the number of paying subscribers had doubled to two million. In August 2012, Time reported 15 million active users, four million of them paying Spotify subscribers.
User growth continued, reaching 20 million total active users, including five million paying customers globally and one million paying customers in the United States, in December 2012. By March 2013, the service had 24 million active users, six million of them paying subscribers, which grew to 40 million users (including ten million paying) in May 2014, 60 million users (including 15 million paying) in December 2014, 75 million users (20 million paying) in June 2015, 30 million paying subscribers in March 2016, 40 million paying subscribers in September 2016, and 100 million total users in June 2016. In April 2020, Spotify reached 133 million premium users. In countries affected by the COVID-19 pandemic, Spotify registered a fall in users in late February, but it later saw a recovery. In March 2022, Spotify had 182 million premium subscribers. At the end of Q2 2022, Spotify reported 188 million paying subscribers and 433 million total users. At the end of Q3 2024, Spotify reported 252 million subscribers and 640 million monthly active users. In February 2026, Spotify announced that it had added a record number of new users in a single quarter, adding 38 million new users in the three months to the end of December 2025 to reach 751 million monthly users.

The Financial Times reported in March 2017 that, as part of its efforts to renegotiate new licensing deals with music labels, Spotify and major record labels had agreed that Spotify would restrict some newly released albums to its Premium tier, with Spotify receiving a reduction in royalty fees to do so. Select albums would be available only on the Premium tier for a period of time, before general release. The deal "may be months away from being finalized, but Spotify is said to have cleared this particular clause with major record labels". New reports in April confirmed that Spotify and Universal Music Group had reached an agreement to allow artists part of Universal to limit their new album releases to the Premium service tier for a maximum of two weeks. Ek commented that "We know that not every album by every artist should be released the same way, and we've worked hard with UMG to develop a new, flexible release policy. Starting today, Universal artists can choose to release new albums on premium only for two weeks, offering subscribers an earlier chance to explore the complete creative work, while the singles are available across Spotify for all our listeners to enjoy". It was announced later in April that this type of agreement would be extended to indie artists signed to the Merlin Network agency.

Spotify went public on the stock market in April 2018 using a direct public offering rather than an initial public offering. The approach was not intended to raise fresh capital, but to let existing investors realize their returns; Morgan Stanley served as the company's advisor on the matter. After making its debut on the New York Stock Exchange on 3 April 2018, CNBC reported that Spotify opened at $165.90, more than 25% above its reference price of $132.

On 3 July 2020, cybersecurity firm VPNMentor discovered a database containing 380 million individual records, including the logins and passwords of Spotify users. The database was thought to be evidence of an impending credential stuffing cyberattack targeting Spotify, as it contained the credentials of up to 350,000 compromised user accounts. In response to the attack, Spotify issued a rolling reset of passwords for affected accounts in November 2020.
In May 2013, Spotify acquired music discovery app Tunigo. In March 2014, they acquired The Echo Nest, a music intelligence company. In June 2015, Spotify announced they had acquired Seed Scientific, a data science consulting firm and analytics company. In a comment to TechCrunch, Spotify said that Seed Scientific's team would lead an advanced analytics unit within the company, focused on developing data services. In January 2016, they acquired social and messaging startups Cord Project and Soundwave, followed in April 2016 by CrowdAlbum, a "startup that collects photos and videos of performances shared on social networks", which would "enhance the development of products that help artists understand, activate, and monetize their audiences". In November 2016, Spotify acquired Preact, a "cloud-based platform and service developed for companies that operate on subscription models which helps reduce churn and build up their subscriber numbers". In March 2017, Spotify acquired Sonalytic, an audio detection startup, for an undisclosed amount of money. Spotify stated that Sonalytic would be used to improve the company's personalized playlists, better match songs with compositions, and improve the company's publishing data system. Later that month, Spotify also acquired MightyTV, an app connected to television streaming services, including Netflix and HBO Go, that recommends content to users. Spotify intended to use MightyTV to improve its advertising efforts on the free tier of service. In April 2017, they acquired Mediachain, a blockchain startup that had been developing a decentralized database system for managing attribution and other metadata for media. This was followed in May 2017 with the acquisition of artificial intelligence startup Niland, which uses technology to improve personalisation and recommendation features for users. In November 2017, Spotify acquired Soundtrap, an online music studio startup. On 12 April 2018, Spotify acquired the music licensing platform Loudr.

In August 2018, Spotify bought the exclusive rights to The Joe Budden Podcast and expanded the show to a twice-weekly schedule. On 6 February 2019, Spotify acquired the podcast networks Gimlet Media and Anchor FM Inc., with the goal of establishing themselves as a leading figure in podcasting. On 26 March 2019, Spotify announced they would acquire another podcast network, Parcast. On 12 September 2019, Spotify acquired SoundBetter, a music production marketplace for people in the music industry to collaborate on projects and distribute music tracks for licensing. In October 2021, SoundBetter was sold back to the founders. On 19 November 2019, Spotify announced the acquisition of the exclusive rights to The Last Podcast on the Left. On 5 February 2020, Spotify announced its intent to acquire Bill Simmons' sports and pop culture blog and podcast network The Ringer for an undisclosed amount. On 19 May 2020, Spotify acquired exclusive rights to stream the popular podcast The Joe Rogan Experience beginning in September of that year, under an agreement valued at around US$100 million (equivalent to $119,100,000 in 2024). In November 2020, Spotify announced plans to acquire Megaphone from The Slate Group for US$235 million. In March 2021, Spotify acquired app developer Betty Labs and their live social audio app, Locker Room.
On 12 May 2021, Armchair Expert announced on Instagram that the podcast would be available exclusively on Spotify beginning 1 July, saying they would continue to maintain the same creative control over the show after the move. Locker Room was rebranded in June 2021 as Spotify Greenroom, and turned into a Clubhouse competitor. The same month, Spotify acquired Podz, a podcast discovery startup. Also the same month, Spotify bought the exclusive rights to the Call Her Daddy podcast. In November 2021, Spotify acquired audiobook company Findaway, including its publishing imprint OrangeSky Audio. In December 2021, Spotify acquired Whooshkaa, a podcast tech company that develops specialized technology allowing radio broadcasters to easily turn their existing audio content into on-demand podcast programming. In February 2022, Spotify acquired Chartable and Podsights, both podcast advertising companies. In 2022, Spotify Greenroom was rebranded as Spotify Live, which was subsequently shut down in April 2023. In June 2022, Spotify acquired Sonantic, a synthetic voice and video developer. In July 2022, Spotify acquired Heardle, a Wordle-inspired music trivia game, for an undisclosed amount; Heardle was shut down in May 2023. In October 2022, Spotify acquired the Dublin-based content moderation startup Kinzen. In 2023, Spotify merged Anchor into its Spotify for Podcasters tool, a rebranding that organized its tools for creating, managing, growing, and monetizing content in one place. In November 2024, Spotify for Podcasters was rebranded to Spotify for Creators. In November 2025, Spotify acquired music database WhoSampled.

In January 2015, Sony announced PlayStation Music, a new music service with Spotify as its exclusive partner. PlayStation Music incorporates the Spotify service into Sony's PlayStation 3 and PlayStation 4 gaming consoles, and Sony Xperia mobile devices. The service launched on 30 March 2015.

In March 2017, Spotify announced a partnership with the South by Southwest (SXSW) conference for 2017, presenting specific content in special playlists through an SXSW hub in Spotify's apps. The integration also enabled Spotify within the SXSW GO app to help users discover and explore artists performing at the conference. Two more partnerships were announced in March: one with WNYC Studios, and one with Waze. The WNYC Studios partnership brought various podcasts from WNYC to Spotify, including Note to Self, On the Media and Here's the Thing. Spotify also announced that the third season of WNYC Studios' 2 Dope Queens podcast would premiere with a two-week exclusivity period on the service on 21 March 2017. The Waze partnership allows Waze app users to view directions to destinations within the Spotify app and access their Spotify playlists through the Waze app.

In October 2017, Microsoft announced that it would be ending its Groove Music streaming service by December, with all music from users transferring to Spotify as part of a new partnership. In December, Spotify and Tencent's music arm, Tencent Music Entertainment (TME), agreed to swap stakes and make an investment in each other's music businesses. As a result of this transaction, Spotify gained a 9% stake in TME, with TME gaining a 7.5% stake in Spotify. In February 2018, Spotify integrated with the gaming-oriented voice chat service Discord on desktop clients, allowing users to display their currently playing song as a rich presence on their profile, and invite other users with Spotify Premium to group "listening parties".
In April, Spotify announced a discounted entertainment bundle with video-on-demand provider Hulu, which included discounted rates for university students. In May 2020, Spotify teamed up with ESPN and Netflix to curate podcasts around their Michael Jordan documentary The Last Dance, and in September, Spotify signed a deal with Chernin Entertainment to produce movies and TV shows. In 2020 and 2021, Spotify and DC, a brand at the time under Warner Bros. Entertainment, signed deals to create audio shows on the platform around characters such as Catwoman, Wonder Woman, the Riddler, Batgirl, Superman and Lois Lane, among others. In 2022, Spotify became the official streaming partner of FC Barcelona. In May 2022, Spotify announced a partnership with the online game platform and game creation system Roblox; the partnership made Spotify the first streaming brand to have a presence within the game, with the launch of "Spotify Island". In March 2023, Spotify announced a partnership with Patreon, which Spotify claimed would "enable creators to expand their creative business through direct payments from fans, and allow fans to listen to their Patreon content on Spotify". On 6 October 2025, Spotify announced a partnership with OpenAI to bring music and podcast recommendations inside ChatGPT, allowing Spotify users to discover and queue new music through conversations rather than search. Listeners are able to link their Spotify account to ChatGPT, asking it to find anything from a specific playlist to a podcast topic.

In July 2015, Spotify launched an email campaign urging its App Store subscribers to cancel their subscriptions and start new ones through its website, bypassing the 30% transaction fee for in-app purchases required for iOS applications by technology company Apple Inc. A later update to the Spotify app on iOS was rejected by Apple, prompting Spotify's general counsel Horacio Gutierrez to write a letter to Apple's then-general counsel Bruce Sewell, stating: "This latest episode raises serious concerns under both U.S. and EU competition law. It continues a troubling pattern of behavior by Apple to exclude and diminish the competitiveness of Spotify on iOS and as a rival to Apple Music, particularly when seen against the backdrop of Apple's previous anticompetitive conduct aimed at Spotify ... we cannot stand by as Apple uses the App Store approval process as a weapon to harm competitors." Sewell responded to the letter: "We find it troubling that you are asking for exemptions to the rules we apply to all developers and are publicly resorting to rumors and half-truths about our service." He also elaborated that "Our guidelines apply equally to all app developers, whether they are game developers, e-book sellers, video-streaming services or digital music distributors; and regardless of whether they compete against Apple. We did not alter our behavior or our rules when we introduced our own music streaming service or when Spotify became a competitor". Furthermore, he stated that "There is nothing in Apple's conduct that 'amounts to a violation of applicable antitrust laws.' Far from it. ... I would be happy to facilitate an expeditious review and approval of your app as soon as you provide us with something that is compliant with the App Store's rules".
In the following months, Spotify joined several other companies in filing a letter with the European Union's antitrust body indirectly accusing Apple and Google of "abusing their 'privileged position' at the top of the market", by referring to "some" companies as having "transformed into 'gatekeepers' rather than 'gateways'". The complaint led to the European Union announcing that it would prepare an initiative by the end of 2017 for a possible law addressing unfair competition practices.

Spotify released the first version of its Apple Watch app in November 2018, allowing playback control of the iPhone via the watch. Users can also choose which devices to play music on via Bluetooth. In a further escalation of the dispute with Apple, on 13 March 2019, Spotify filed an antitrust complaint with the European Commission over unfair app store practices. Two days later, Apple responded, stating that the claim was misleading rhetoric and that Spotify wanted the benefits of a free app without being a free app. Spotify responded with a statement calling Apple a monopolist and stated that it had only filed the complaint because Apple's actions hurt competition and consumers and clearly violated the law. It also said that Apple believed Spotify users on the App Store were Apple's customers and not Spotify's. Apple responded to Spotify's claims by counter-claiming that Spotify's market reach and user base would not have been possible without the Apple App Store platform. Additionally, Apple stated that it had attempted to work with Spotify to integrate the service better with Apple's products, such as Siri and Apple Watch. In 2019, under iOS 13, it became possible to play Spotify music using Siri commands. Spotify was one of the first companies to support Epic Games in their lawsuit against Apple, which was filed after Epic also tried to bypass Apple's 30% fee for microtransactions in Fortnite. In September 2020, Spotify, Epic, and other companies founded The Coalition for App Fairness, which aims for better conditions for the inclusion of apps in app stores.

On 1 March 2021, Spotify confirmed that its platform would no longer have access to music from artists represented by Kakao Entertainment. However, after negotiations and renewed contracts between the two, Spotify announced that it had reached an agreement with Kakao Entertainment, allowing their content to be available once again on the platform across the globe.

In November 2021, Spotify hid the "shuffle" button for albums following a request by singer Adele, who argued that tracks in albums are supposed to be played back in the order specified by the artist to "tell a story". In May 2022, Spotify began testing a feature that would allow select artists to promote their NFTs via their profiles. Some artists included in this initial test phase were Steve Aoki and the Wombats. The testing was very limited in nature and was only available on Spotify's Android app in the US.

In May 2023, Spotify removed tens of thousands of songs, roughly 7% of the tracks uploaded by the AI music startup Boomy, due to suspected "artificial streaming", the practice of using online bots to inflate listening statistics. In 2022, the Swedish daily Dagens Nyheter compared Spotify streaming data against documents retrieved from the Swedish copyright collection society STIM, and found that around twenty songwriters were behind the work of more than five hundred "artists," and that thousands of their tracks were on Spotify and had been streamed millions of times.
In December 2024, Harper's Magazine released a report stating that Spotify was padding out playlists with ghost artists created by production companies in order to minimise royalty costs and increase profits. According to the report, the practice started in 2017 with a program called Perfect Fit Content (PFC). In 2025, Spotify donated $150,000 to President Donald Trump's inauguration ceremony, as well as hosting an inauguration-related brunch.

Spotify supports integration with DJ software, allowing DJs to mix with music streamed from the platform. This integration was removed in July 2020 but was reintroduced in September 2025, now supporting Serato, Rekordbox, and Djay. Though all three applications support real-time stem separation and mixing, this functionality is disabled on music streamed from Spotify. Caching tracks for offline use is also unsupported in DJ software.

In December 2025, over 300 terabytes of files were scraped from Spotify's servers and uploaded to Anna's Archive, totalling 86 million songs uploaded from 2007 to July 2025 that account for 37% of all songs and 99.6% of all listens, as well as 256 million rows of song metadata. Spotify responded by condemning the leak, stating that it had implemented new safeguards against similar attacks and banned user accounts believed to be connected to the leak.

Corporate affairs

Spotify reported its first profitable year in fiscal 2024. The key trends for Spotify Technology are (as of the financial year ending 31 December):

Unionization

Spotify has recognized trade unions at its US podcasting subsidiaries The Ringer and Spotify Studios since 2019. In Germany, a works council was established in 2023. Swedish trade unions have unsuccessfully attempted to bargain collectively with Spotify since 2023. Spotify GmbH employees in Berlin established an electoral board in February 2023, which prepared the election for the works council in April. Spotify AB does not recognize any trade unions or have any collective agreements in Sweden. Spotify ended joint negotiations with the three trade unions Unionen, Akavia, and Engineers of Sweden (the latter two affiliates of SACO) in August 2023. The three unions had petitioned Spotify to negotiate back in May. 90% of Swedish workers are covered by collective agreements; in tech companies, bargaining coverage is less common. Swedish labor disputes are also ongoing at Tesla and Klarna as of 2024.

In November 2022, Henry Catalini Smith, a Spotify engineer in Malmö, set up the channel #kollektivavtal, Swedish for "collective agreement", in the internal company Slack. The channel grew to 2,000 participants. 700 employees have since joined Unionen, with another 100 each joining Engineers of Sweden and Akavia. Catalini Smith no longer works at Spotify.

Writers Guild of America, East represents two union affiliates at Spotify Studios and The Ringer. The United Musicians and Allied Workers (UMAW), established in 2020 during the COVID-19 pandemic, campaigns for a fairer redistribution and compensation system for musicians. One year later, 27,500 musicians in 31 cities worldwide joined UMAW's campaign #JusticeAtSpotify to demand compensation of one cent per audio stream. Moreover, they are asking for a fairer redistribution system, as smaller artists are disproportionately disadvantaged on Spotify.
One month after Spotify acquired Gimlet Media in February 2019, 75% of staff at Gimlet Media went public, signing union cards and seeking voluntary recognition. In August, The Ringer's editorial staff voted to unionize with the Writers Guild of America, East (before it was owned by Spotify). The union was voluntarily recognized by Ringer management four days later. In February 2020, Spotify announced it was acquiring The Ringer, inheriting the previously established union. A year later, in April 2021, writers and producers ratified their first collective agreement with Gimlet Media and Ringer. It would last three years, with a minimum base salary of $57,000 for Ringer staff and $73,000 for Gimlet producers. There was no provision regarding worker ownership of content created, one of the initial demands.

Spotify acquired Parcast in March 2019. Six months later, Parcast workers went public with their union drive, which was recognized a month later by Parcast. After 15 months of bargaining, the Parcast union, consisting of 56 workers, ratified their first collective agreement, which included a minimum salary of $70,000, annual increases, and affirmative action in hiring. In March 2024, the Writers Guild of America, East ratified a collective agreement with Ringer and Spotify Studios (Spotify Studios was formed as a merger of Gimlet Media, Parcast and their respective unions) which increased minimum base salaries to $65,000, added protections for migrant employees, and included safeguards against the use of artificial intelligence to create "digital replicas" of their voices.

Business model

Spotify operates under a freemium business model (basic services are free, while additional features are offered via paid subscriptions). Spotify generates revenue by selling premium streaming subscriptions to users and advertising placements to third parties. The premium options users may choose from include individual, duo, family, and student plans. In December 2013, the company launched a new website, "Spotify for Artists", explaining its business model and revenue data.

Spotify gets its content from major record labels as well as independent artists and pays copyright holders royalties for streaming music. The company pays about 70% of its total revenue to rights holders: about 58.5% of total revenue goes to the owners of sound recording copyrights, and the other 12% goes to the owners of musical compositions. In the United States, the composition share is split between 6% mechanical royalties (paid via the Mechanical Licensing Collective) and 6% performance royalties (paid via performing rights organizations or PROs). Spotify for Artists states that the company does not have a fixed per-play rate; instead, it considers factors such as the user's home country and the individual artist's royalty rate. Rights holders received an average per-play payout between $0.000029 and $0.0084. Royalties on streaming services, including Spotify, are paid out of a shared revenue pool in proportion to each artist's share of total streams rather than at a fixed per-stream rate, so the artists whom users listen to the most receive the largest percentage of the payouts.

In 2013, Spotify revealed that it paid artists an average of $0.007 per stream. Music Week editor Tim Ingham commented that while the figure may "initially seem alarming", "Unlike buying a CD or download, streaming is not a one-off payment.
Today, royalties on all streaming services, including Spotify, are paid on a listening-share basis rather than per stream, as this allows the artists whom users listen to the most to receive the largest percentage of the payouts. In 2013, Spotify revealed that it paid artists an average of $0.007 per stream. Music Week editor Tim Ingham commented that while the figure may "initially seem alarming", "[u]nlike buying a CD or download, streaming is not a one-off payment. Hundreds of millions of streams of tracks are happening every day, which quickly multiplies the potential revenues on offer—and is a constant long-term source of income for artists." According to Ben Sisario of The New York Times, approximately 13,000 out of seven million artists (0.19%) on Spotify generated $50,000 (equivalent to $59,540 in 2024) or more in payments in 2020. In November 2023, Spotify announced a new royalty model taking effect in 2024, aiming to reduce the amount of "fraudulent" royalties collected from "functional" non-music tracks with short lengths (such as environmental sounds and white noise). Under the model, a track must reach at least 1,000 listens in 12 months, as well as a minimum number of unique listeners, to become eligible for sound recording royalties (a rule sketched in code below); "functional" tracks require a longer amount of play time to count as a listen, and distributors face reprimands if their content is responsible for generating "fraudulent" royalties. This eligibility rule only applies to royalties for sound recordings, not musical compositions. The changes faced a mixed reaction from the music industry, with some believing they would be detrimental to emerging musicians and others noting they would make a larger share of total royalty payments available to musicians.
As of August 2022, Spotify offered two subscription tiers, Free and Premium; the Free tier still shows sponsored content, such as Spotify Showcase banners, and neither tier limits listening time. In March 2014, Spotify introduced a discounted Premium subscription tier for students, in which students in the United States enrolled in a university pay half-price for a Premium subscription. In April 2017, the discount was expanded to 33 more countries. Spotify introduced a Family subscription in October 2014, which allowed up to five family members to share a premium subscription. In May 2016, the limit was changed to six family members, and the price was reduced. The Family subscription provides access to Spotify Kids. In November 2018, Spotify announced it was opening up Spotify Connect to all users of its Free service; however, these changes still required products supporting Spotify Connect to support the latest software development kit. In July 2020, Spotify added another tier, Premium Duo. Aimed at couples, it lets up to two people living at the same address share a subscription. In February 2021, Spotify announced plans to introduce a HiFi subscription, to offer listening in high-fidelity, lossless sound quality. On 10 January 2022, the HiFi tier was delayed indefinitely. On 10 September 2025, it was announced that Lossless would roll out gradually to Premium listeners in more than 50 markets through October. In August 2021, Spotify launched a test subscription tier called Spotify Plus. The subscription costs $0.99 and is intended as a combination of the free and premium tiers: subscribers still receive ads but can listen to songs without shuffle mode and skip any number of tracks. The company reported that the tier conditions may change before its full launch. In September 2025, Spotify began removing restrictions preventing free users from listening to specific tracks. Mobile listeners are now able to tap on any song or search for what they'd like to play; this was previously limited to desktop and iPad users.
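Returning to the 2024 royalty model described above: its eligibility rule is essentially a threshold check. The sketch below uses the announced 1,000-streams-in-12-months threshold; Spotify has not published the exact unique-listener minimum, so that figure is an assumed parameter here.

```python
# Sketch of the 2024 eligibility rule for sound recording royalties.
# The 1,000-stream threshold is from Spotify's announcement; the exact
# unique-listener minimum is not public, so it is an assumed parameter.

def earns_recording_royalties(streams_last_12mo: int,
                              unique_listeners: int,
                              min_unique_listeners: int = 50) -> bool:
    """True if a track qualifies for sound recording royalties."""
    return streams_last_12mo >= 1000 and unique_listeners >= min_unique_listeners

print(earns_recording_royalties(1200, 300))  # True
print(earns_recording_royalties(999, 300))   # False: under the stream threshold
```

Note that, as stated above, this check would gate only sound recording royalties; composition royalties are unaffected.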
Spotify also offers an "Audiobook Access" option giving paying subscribers access to its audiobook catalog for a limited time each month.
In 2008, just after launch, the company made a loss of 31.8 million Swedish kronor (US$4.4 million). In October 2010, Wired reported that Spotify was making more money for labels in Sweden than any other retailer "online or off". After years of growth and expansion, a November 2012 report suggested strong momentum for the company. In 2011, it reported a near US$60 million net loss from revenue of $244 million (equivalent to $334,700,000 in 2024), while it was expected to generate a net loss of $40 million (equivalent to $53,860,000 in 2024) from revenue of $500 million in 2012 (equivalent to $673,200,000 in 2024). Another source of income was music purchases from within the app; however, this service was removed in January 2013. In May 2016, Spotify announced "Sponsored Playlists", a monetisation opportunity in which brands can specify the audiences they have in mind, with Spotify matching the marketer with suitable music in a playlist. That September, Spotify announced that it had paid a total of over $5 billion to the music industry. In June 2017, as part of renegotiated licenses with Universal Music Group and Merlin Network, Spotify's financial filings revealed its agreement to pay more than $2 billion (equivalent to $2,510,000,000 in 2024) in minimum payments over the next two years. As of 2017, Spotify was not yet a profitable company. Spotify's revenue for Q1 2020 amounted to €1.85 billion ($2 billion). A large part of this sum, €1.7 billion ($1.84 billion), came from Spotify Premium subscribers. Gross profit in the same quarter amounted to €472 million ($511 million), with an operating loss of €17 million ($18 million). Despite subscriber and podcast growth during Q2 2020, Spotify reported a loss of €356 million (€1.91 per share); the deeper loss came as a result of the company's tax debt relating to over one-third of its employees in Sweden. Spotify became profitable for the first time in 2024, with a net profit of €1.14 billion ($1.17 billion). In 2024, Spotify added 15 hours of audiobook listening per month to its Premium tier. To balance the expense of licensing audiobooks and music together, the company introduced a new "bundle" royalty rate for songwriters. Billboard estimated that the changes would pay songwriters and publishers $150 million less "from premium, duo and family plans for the first 12 months that this is in effect, compared to what they would have earned if these three subscriptions were never bundled."
In February 2010, Spotify received a small investment from Founders Fund, whose board member Sean Parker was recruited to assist Spotify in "winning the labels over in the world's largest music market". In June 2011, Spotify secured $100 million of funding (equivalent to $137,200,000 in 2024) and planned to use this to support its US launch. The new round of funding valued the company at $1 billion. A Goldman Sachs-led round of funding closed in November 2012, raising around $100 million (equivalent to $134,600,000 in 2024) at a $3 billion valuation (equivalent to $4,039,000,000 in 2024). In April 2015, Spotify began another round of fundraising, with a report from The Wall Street Journal stating it was seeking $400 million (equivalent to $515,700,000 in 2024), which would value the company at $8.4 billion (equivalent to $10,831,000,000 in 2024).
The financing was closed in June 2015, with Spotify raising $526 million (equivalent to $678,200,000 in 2024) at a valuation of $8.53 billion (equivalent to $10,998,000,000 in 2024). In January 2016, Spotify raised another $500 million (equivalent to $638,600,000 in 2024) through convertible bonds. In March 2016, Spotify raised $1 billion (equivalent to $1,277,000,000 in 2024) in debt financing, plus a 20% discount on shares once the initial public offering (IPO) took place. According to TechCrunch, the company was planning to launch on the stock market in 2017, but during 2017 it was seen as planning the IPO for 2018 to "build up a better balance sheet and work on shifting its business model to improve its margins".
In March 2009, Spotify began offering music downloads in the United Kingdom, France, and Spain. Users could purchase tracks from Spotify, which partnered with 7digital to incorporate the feature. The ability to purchase and download music tracks via the app was removed on 4 January 2013. In November 2015, Spotify introduced a "Fan Insights" panel in limited beta form, letting artists and managers access data on monthly listeners, geographical data, demographic information, music preferences, and more. In April 2017, the panel left beta status, was renamed "Spotify for Artists", and was opened to all artists and managers. Additional features include the ability to gain "verified" status with a blue checkmark on an artist's profile, receive artist support from Spotify, customise the profile page with photos, and promote a certain song as their "pick". In September 2018, Spotify announced "Upload Beta", allowing artists to upload directly to the platform instead of going through a distributor or record label. The feature was rolled out to a small number of US-based artists by invitation only. Uploading was free and artists received 100% of the revenue from songs they uploaded; artists were able to control when their release went public. On 1 July 2019, Spotify deprecated the program and announced plans to stop accepting direct uploads by the end of that month and eventually remove all content uploaded in this manner. In June 2017, Variety reported that Spotify would announce "Secret Genius", a new initiative aimed at highlighting songwriters and producers, and the effect those people have on the music industry and artists' careers. The project would feature awards, "Songshop" songwriting workshops, curated playlists, and podcasts, in an effort to "shine a light on these people behind the scenes who play such a big role in some of the most important moments of our lives. When the general public hears a song, they automatically associate it with the artist who sings it, not the people behind the scenes who make it happen, so we thought the title Secret Genius was appropriate", Spotify's former Global Head of Creator Services Troy Carter told Variety. The first awards ceremony was to take place in late 2017 and was intended to honour "the top songwriters, producers and publishers in the industry as well as up-and-coming talent". Additionally, as part of "The Ambassador Program", 13 songwriters would each host a Songshop workshop, in which their peers would collaboratively attempt to create a hit song, with the first workshop taking place in Los Angeles in June 2017. In October 2017, Spotify launched "Rise", a program aimed at promoting emerging artists.
In February 2020, Spotify announced it would be featuring new songwriter pages and "written by" playlists, aimed at giving fans a behind-the-scenes look at the process of some of their favorite songwriters. Initial pages included Justin Tranter, Meghan Trainor, and Missy Elliott. Spotify thereafter announced it was planning to add more of these pages and playlists to highlight songwriters. In January 2021, Spotify made a selection of audiobooks available on the platform as a test of developing a greater breadth of content for users. The addition of audiobooks to the service created offerings similar to those of Amazon's Audible. In 2020, Spotify partnered with Wizarding World to release a series of recorded readings of Harry Potter and the Philosopher's Stone by various stars of the franchise. In November 2023, Spotify expanded free access to 200,000 audiobooks for Spotify Premium subscribers. In April 2024, Spotify expanded audiobook access beyond the US, UK, and Australia to include Canada, Ireland, and New Zealand. The company also announced an expansion of its book catalogue to 250,000 titles.
On 31 January 2018, Spotify started testing a new Pandora-styled standalone app called Stations by Spotify for Australian Android users. It featured 62 music channels, each devoted to a particular genre. Spotify itself had two channels named after its personalized playlists, "Release Radar" and "Discover Weekly", which linked directly to the user's profile. The aim was to help users listen to the music they want without information overload or spending time building their own playlists. At launch, skipping was not available, to "reinforce the feel of radio", but it was quietly added later with no limits. Songs could be "loved" but not "hated". If a song was "loved", a custom radio channel would be created based on it, and once there were at least 15 of these songs, a "My Favourites" channel was unlocked. The standalone app was made available to all iOS and Android users in the United States on 4 June 2019. Spotify announced the app would be shut down on 16 May 2022; the company said users would be able to log in to the main Spotify app with their Stations account and transfer their stations into Spotify.
Platforms Spotify has client software currently available for Windows, macOS, Wear OS, Android, iOS, watchOS, iPadOS, PlayStation 3, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S game consoles. There is an official, although unsupported, Linux version. Spotify also offers a proprietary protocol known as "Spotify Connect", which lets users listen to music through a wide range of entertainment systems, including speakers, receivers, TVs, cars, and smartwatches. Spotify also has a web player (open.spotify.com). Offline music listening is possible on watchOS and, more recently, on Google's Wear OS for those with premium subscriptions. Unlike the apps, the web player does not have the ability to download music for offline listening. In June 2017, Spotify became available as an app through the Windows Store. In Spotify's apps, music can be browsed or searched for via various parameters, such as artist, album, genre, playlist, or record label. Users can create, edit, and share playlists, share tracks on social media, and make playlists with other users. Spotify provides access to over 100 million songs, 7 million podcasts, and 4 billion playlists.
In November 2011, Spotify introduced a Spotify Apps service that made it possible for third-party developers to design applications that could be hosted within the Spotify computer software. The applications provided features such as synchronised lyrics, music reviews, and song recommendations. In June 2012, Soundrop became the first Spotify app to attract major funding, receiving $3 million (equivalent to $4,039,000 in 2024) from Spotify investor Northzone. However, after the June 2014 announcement of a Web API that allowed third-party developers to integrate Spotify content into their own web applications (a sketch of such an integration appears below), the company discontinued its Spotify Apps platform in October, stating that its new development tools for the Spotify web player, which lets users access the service directly from their web browser without downloading the app, fulfilled many of the advantages of the former Spotify Apps service. In April 2012, Spotify introduced the "Spotify Play Button", an embeddable music player that can be added to blogs, websites, or social media profiles, letting visitors listen to a specific song, playlist, or album without leaving the page. The following November, the company began rolling out a web player with a similar design to its computer programs, but without the requirement of any installation. In December 2012, Spotify introduced a "Follow" tab and a "Discover" tab, along with a "Collection" section. "Follow" lets users follow artists and friends to see what they are listening to, while "Discover" directs users to new releases as well as music, review, and concert recommendations based on listening history. Users can add tracks to a "Collection" section of the app, rather than adding them to a specific playlist. The features were announced by CEO Daniel Ek at a press conference, with Ek saying that a common user complaint about the service was that "Spotify is great when you know what music you want to listen to, but not when you don't". In May 2015, Spotify announced a new "Home" start page that could recommend music. The company also introduced "Spotify Running", a feature that matches music to the user's running tempo, and announced that podcasts and videos ("entertainment, news and clips") would be coming to Spotify, along with "Spotify Originals" content. In December 2015, Spotify debuted Spotify Wrapped, a program that creates playlists based on each user's most listened-to songs from the year; users can then view and save this playlist at the end of the year. In January 2016, Spotify and music annotation service Genius formed a partnership, bringing annotation information from Genius into infocards presented while songs are playing in Spotify. The functionality, known as "Behind the Lyrics", was limited to selected playlists and was only available on Spotify's iOS app at launch, being expanded to the Android app in April 2017. As of 18 November 2021, "Behind the Lyrics" has been replaced with auto-generated real-time lyrics due to consumer demand. The feature is powered by lyrics providers Musixmatch and PetitLyrics (the latter only in Japan). In May 2017, Spotify introduced Spotify Codes for its mobile apps, a way for users to share specific artists, tracks, playlists, or albums with other people. Users find the relevant content to share and press a "soundwave-style barcode" on the display. A camera icon in the apps' search fields lets other users point their device's camera at the code, which takes them to the same content.
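To make the Web API mentioned above concrete, here is a minimal sketch of a third-party integration: fetching an app-only token via the client-credentials flow, then searching the public catalog. The endpoints are part of Spotify's documented Web API; the client ID and secret are hypothetical placeholders, and error handling is kept minimal.

```python
# Minimal sketch of a third-party Web API integration: obtain an
# app-only token (client-credentials flow), then search the public
# catalog. Credentials below are hypothetical placeholders.
import requests

CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

def get_token() -> str:
    resp = requests.post(
        "https://accounts.spotify.com/api/token",
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic auth
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def search_tracks(query: str, token: str) -> list:
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        headers={"Authorization": f"Bearer {token}"},
        params={"q": query, "type": "track", "limit": 5},
    )
    resp.raise_for_status()
    return resp.json()["tracks"]["items"]

token = get_token()
for track in search_tracks("release radar", token):
    print(track["name"], "-", track["artists"][0]["name"])
```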
In January 2019, Spotify introduced Car View for Android, allowing devices running Android to show a compact Now Playing screen when the device is connected to a car's Bluetooth. Also in January 2019, Spotify beta-tested its Canvas feature, where artists or labels can upload looping three- to eight-second moving visuals to their tracks, replacing album covers in the "Now Playing" view; users have the option to turn off this feature. Canvas is only available for Spotify's iOS and Android mobile apps. Months later, Spotify tested its own version of stories (the sharing format popularized by social apps), known as "Storyline", with a focus on allowing artists to share their own insights, inspiration, details about their creative process, or other meanings behind the music. In March 2021, Spotify announced an upcoming option for higher-resolution sound, Spotify HiFi. In September 2025, Lossless audio, which allows Spotify users to listen to songs in very high quality, became available through Spotify Premium.
In July 2015, Spotify launched Discover Weekly, a playlist generated weekly. Updated on Mondays, it provides users with music recommendations. In December 2015, Quartz reported that songs in Discover Weekly playlists had been streamed 1.7 billion times. In March 2016, Spotify launched six playlists branded as Fresh Finds, including the main playlist and Fire Emoji, Basement, Hiptronix, Six Strings, and Cyclone (hip-hop, electronic, pop, guitar-driven, and experimental music respectively). The playlists spotlight songs by lesser-known musicians. In August 2016, Spotify launched Release Radar, a personalized playlist that allows users to stay up to date on new music released by the artists they listen to the most. It also helps users discover new music by mixing in other artists' music. The playlist is updated every Friday and is a maximum of two hours in length. Spotify provides artists taking part in RADAR, its emerging-artist program, with resources and access to integrated marketing opportunities to help them boost their careers, in addition to expanded reach and exposure to 178 markets worldwide. In 2016, Spotify introduced its Daily Mix feature, which creates playlists of music that a user has previously listened to on the platform. In 2017, Spotify introduced RapCaviar, a hip-hop playlist. RapCaviar had 10.9 million followers by 2019, becoming one of Spotify's top five playlists; it was originally curated by Tuma Basa and was relaunched by Carl Chery in 2019. In June 2019, Spotify launched a custom playlist titled "Your Daily Drive" that closely replicates the drive-time format of many traditional radio stations. It combines short-form podcast news updates from The Wall Street Journal, NPR, and PRI with a mix of a user's favorite songs and artists, interspersed with tracks the listener has yet to discover. "Your Daily Drive", which is found in a user's library under the "Made For You" section, updates throughout the day. In May 2020, Spotify introduced the Group Session feature, which allows two or more Premium users in the same location to share control over the music that is being played. The Group Session feature was later expanded to allow any Premium user to join and participate in a Group Session via a special link the host can send to participants. In July 2021, Spotify launched the "What's New" feed, a section that collects all new releases and episodes from artists and podcasts that the user follows.
The feature is represented by a bell icon on the app's main page and is available on iOS and Android. In November 2021, Spotify launched the City and Local Pulse charts, aimed at representing the songs listened to in major cities around the world. The charts are available for the 200 cities with the most listeners on Spotify. In 2023, Spotify launched additional features to help independent artists distributing their music on the platform reach a wider array of potential fans. One such feature, rolled out in March 2023, is called "Discovery Mode". Discovery Mode allows artists who meet certain criteria and have a Spotify for Artists account to submit qualifying songs for Spotify's in-house promotion services. Spotify helps place songs campaigned through Discovery Mode on listeners' personal algorithmic playlists. Discovery Mode does not require an upfront budget. Instead, a 30% commission is applied to recording royalties generated from all streams of selected songs in Discovery Mode contexts—Spotify Radio and Autoplay (the arithmetic is sketched below). All other streams of selected songs outside of Spotify Radio and Autoplay remain commission-free. In September 2023, Spotify introduced "Daylist", a new kind of playlist which adapts to the user's mood throughout the day. In April 2025, Spotify expanded its AI Playlist beta feature to Premium listeners in over 40 new English-speaking markets, including countries in Africa, Asia, Europe, and the Caribbean. The tool allows users to generate personalized playlists from text prompts describing genres, moods, artists, activities, or creative ideas like animals, movie characters, colors, or emojis. Users can refine playlists with additional instructions, such as "more upbeat" or "happier songs". As of 24 April 2025, the feature was available in markets including Antigua and Barbuda, Australia, Bahamas, Barbados, Belize, Botswana, Burundi, Canada, Curaçao, Dominica, Eswatini, Fiji, Ghana, Grenada, Guyana, Ireland, Jamaica, Kenya, Kiribati, Lesotho, Liberia, Malawi, Malta, Marshall Islands, Namibia, Nauru, New Zealand, Nigeria, Palau, Papua New Guinea, Philippines, Rwanda, Saint Kitts and Nevis, Saint Vincent and the Grenadines, Samoa, Sierra Leone, Singapore, South Africa, Solomon Islands, Tanzania, Tonga, Uganda, United Kingdom, United States, Vanuatu, Zambia, and Zimbabwe.
Spotify has experimented with different limitations on users' listening on the Free service tier. In April 2011, Spotify announced via a blog post that it would drastically cut the amount of music that free members could access, effective 1 May 2011. The post stated that all free members would be limited to ten hours of music streaming per month and, in addition, individual tracks would be limited to five plays. New users were exempt from these changes for six months. In March 2013, the five-play individual track limit was removed for users in the United Kingdom, and media reports stated that users in the United States, Australia, and New Zealand never had the limit in the first place. In December 2013, CEO Daniel Ek announced that Android and iOS smartphone users with the free service tier could listen to music in Shuffle mode, a feature in which users can stream music by specific artists and playlists without being able to pick which songs to hear. Mobile listening previously was not allowed on Spotify Free accounts. Ek stated, "We're giving people the best free music experience in the history of the smartphone." This limitation did not apply to Android and iOS tablets, or to computers. In January 2014, Spotify removed all time limits for Free users on all platforms, including computers, which previously had a 10-hour monthly listening limit after a 6-month grace period. In April 2018, Spotify began to allow Free users to listen on-demand to whatever songs they want an unlimited number of times, as long as the song is on one of the user's 15 personalized discovery playlists. Before May 2020, all service users were limited to 10,000 songs in their library, after which they would receive an "Epic collection, friend" notification and would not be able to save more music to their library; Spotify later removed this limit.
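Returning to Discovery Mode: because the 30% commission applies only to Radio and Autoplay streams of enrolled songs, the net effect on royalties is easy to compute. The sketch below uses invented stream counts and an invented per-stream royalty purely for illustration.

```python
# Sketch of Discovery Mode economics: a 30% commission applies only to
# recording royalties from Radio/Autoplay streams of enrolled songs;
# all other streams pay in full. Figures are invented for illustration.

COMMISSION = 0.30

def net_royalties(radio_autoplay_streams: int,
                  other_streams: int,
                  per_stream_royalty: float) -> float:
    discounted = radio_autoplay_streams * per_stream_royalty * (1 - COMMISSION)
    full = other_streams * per_stream_royalty
    return discounted + full

# 40,000 Radio/Autoplay streams and 60,000 other streams at $0.004/stream:
print(round(net_royalties(40_000, 60_000, 0.004), 2))  # 352.0, vs. 400.0 with no commission
```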
Spotify is proprietary and uses digital rights management (DRM) controls, and its terms and conditions do not permit users to reverse-engineer the application. Spotify allows users to add local audio files for music not in its catalog to their library through Spotify's desktop application; users can then synchronize those music files to Spotify's mobile apps or other computers over the same Wi-Fi network as the primary computer by creating a Spotify playlist and adding the local audio files to it. Audio files must be in the .mp3, .mp4 (files with video streams are not supported), or .m4p media formats. This feature is available only to Premium subscribers. Spotify has a median playback latency of 265 ms (including local cache). In April 2014, Spotify moved away from the peer-to-peer (P2P) system it had used to distribute music to users. Previously, a desktop user would listen to music from one of three sources: a cached file on the computer, one of Spotify's servers, or other subscribers through the P2P system (a sketch of this fallback scheme appears below). P2P, a well-established Internet distribution system, served as an alternative that reduced Spotify's server resources and costs. However, Spotify ended the P2P setup in 2014, with Spotify's Alison Bonny telling TorrentFreak: "We're gradually phasing out the use of our desktop P2P technology which has helped our users enjoy their music both speedily and seamlessly. We're now at a stage where we can power music delivery through our growing number of servers and ensure our users continue to receive a best-in-class service." Originally, Spotify ran its own servers; in 2016, most of its infrastructure was migrated to Google Cloud.
Spotify first announced a voice-activated music-streaming gadget for cars in May 2019. Named the Car Thing, it represented the music-streaming service's first entry into hardware devices. In early 2020, as part of filings to the Federal Communications Commission (FCC), Spotify submitted images of the device that made it look much more like a miniature infotainment screen. In April 2021, Spotify rolled out its own voice assistant with the hands-free wake word "Hey Spotify". Using this, users can perform various actions such as pulling up playlists, launching radio stations, and playing or pausing songs. This voice-based virtual assistant may be intended primarily for Spotify's own hardware, such as the Car Thing. The company discontinued the device in July 2022. In May 2024, Spotify sent out announcements to Car Thing owners stating that the hardware would fully stop working on 9 December 2024. Ahead of Spotify Wrapped 2024, Spotify changed its API so that modded versions of the client would not work.
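The pre-2014 three-source playback model described above amounts to a simple fallback chain. The sketch below is illustrative only: the helper names, peer interface, and ordering are assumptions for exposition, not Spotify's actual client logic.

```python
# Illustrative fallback chain for the pre-2014 playback model:
# local cache first, then P2P peers, then Spotify's servers.
# Names and interfaces here are assumptions, not the real client.

def fetch_track(track_id: str, cache: dict, peers: list) -> bytes:
    # 1. Cached file on the local machine
    if track_id in cache:
        return cache[track_id]
    # 2. Other subscribers via the P2P overlay (peers modeled as dicts)
    for peer in peers:
        data = peer.get(track_id)
        if data is not None:
            cache[track_id] = data  # keep a local copy for next time
            return data
    # 3. Fall back to Spotify's own servers
    return fetch_from_server(track_id)

def fetch_from_server(track_id: str) -> bytes:
    # Placeholder for an HTTP fetch from Spotify's servers.
    raise NotImplementedError("server fetch omitted from this sketch")

# Example: cache miss, peer hit.
print(len(fetch_track("track-42", {}, [{"track-42": b"\x00" * 1024}])))  # 1024
```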
Spotify has added social media-related features to its platform, especially to compete for advertising revenue and to increase user engagement. As on other social media websites, Spotify users can share playlists (including collaborative ones), send direct messages, browse a discovery feed, comment on podcasts, answer polls and Q&As posted by podcasters, follow friends and artists, view a "for you" page, and create a user profile. Through Spotify, artists can maintain artist profiles, sell tickets to their venues, promote their merchandise, add videos as stories through Spotify Clips, and promote new releases of their work through countdown pages. Spotify also has a "Blend" feature, in which two users' music preferences are mixed into a shared playlist, and users can start "Jam" sessions, creating and contributing to live listening sessions with their Spotify friends. Spotify Wrapped has generated two billion impressions worldwide, and more than ninety million Spotify users share their Spotify Wrapped statistics online, especially through social media. Artists who gain popularity on social media platforms, especially TikTok, often have that popularity amplified through Spotify. For example, Lil Nas X's "Old Town Road" and Doja Cat's "Say So" first became popular through TikTok, but Spotify playlists accelerated their rise. This furthering of Doja Cat's popularity through Spotify helped her "Paint the Town Red" become the No. 1 song on Spotify on 21 August 2023, the first time a female solo rap artist achieved the top spot on the platform. However, there has been some concern about the music streaming platform's social media features, among them an increased lack of non-online social spaces, an increased lack of privacy from strangers, and the platform serving as a source of unbridled expression, particularly through its podcasts.
Geographic availability The company is incorporated in Luxembourg as Spotify Technology S.A. and headquartered in Stockholm, Sweden, with offices in 16 countries around the world. As of December 2022, Spotify is available in 184 markets across Europe, Africa, the Americas, Asia, and Oceania. Despite the extensive global coverage, the service remains unavailable in several countries and territories, including Afghanistan, British Indian Ocean Territory, Central African Republic, China, Cuba, Eritrea, Iran, Myanmar, North Korea, Russia, Somalia, South Sudan, Sudan, Syria, Turkmenistan, and Yemen. While Spotify's core music streaming service is accessible in all of its active markets, podcasting services are not available in Bangladesh, Belarus, the Democratic Republic of the Congo, Ethiopia, Iraq, Kazakhstan, Kyrgyzstan, Libya, Moldova, Pakistan, the Republic of the Congo, Sri Lanka, Tajikistan, Uganda, and Venezuela. Following the Russian invasion of Ukraine, it temporarily closed its office in Russia and indefinitely suspended all of its services in the country. In 2023, it announced that it would leave Uruguay due to a copyright law, but reversed its decision a few weeks later.
Accolades In September 2010, the World Economic Forum (WEF) selected Spotify as a Technology Pioneer for 2011.
Reception Spotify has attracted significant criticism since its 2008 launch. The primary point of criticism centers on what artists, music creators, and the media have described as "unsustainable" compensation. Unlike physical sales or legal downloads (both of which were the main media for listening to music at the time), which pay artists a fixed amount per song or album sold, Spotify pays royalties based on "market share": the number of streams of an artist's songs as a proportion of total songs streamed on the service. Spotify distributes approximately 70% of its total revenue to rights holders, who then pay artists based on their individual agreements.
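In other words, each rights holder's payout is proportional to its share of total streams, drawn from a pooled fund. The following sketch illustrates the arithmetic using the approximate 70% figure cited above; the stream counts are invented.

```python
# Pro-rata ("market share") allocation: each rights holder's payout is
# their share of total streams, applied to the pooled royalty fund.
# The ~70% rate is the approximate figure cited above; stream counts
# are invented for the example.

def pro_rata_payouts(revenue: float, streams: dict) -> dict:
    pool = revenue * 0.70  # share of revenue paid out to rights holders
    total = sum(streams.values())
    return {artist: pool * n / total for artist, n in streams.items()}

payouts = pro_rata_payouts(1_000_000.0,
                           {"A": 8_000_000, "B": 1_500_000, "C": 500_000})
print({k: round(v) for k, v in payouts.items()})
# {'A': 560000, 'B': 105000, 'C': 35000} -- A's 80% stream share earns
# 80% of the pool, regardless of which individual users streamed A.
```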
Worldwide, 30,000 musicians have joined the Union of Musicians and Allied Workers (UMAW). UMAW organized protests in 31 cities in March 2021, and its campaign #JusticeAtSpotify demands more transparency and compensation of one cent per stream. Spotify has been criticised by artists and producers including Taylor Swift and Thom Yorke, who have argued that Spotify does not fairly compensate musicians; both withdrew their music from the service, and their catalogues returned in 2017. While the streaming music industry in general faces the same critique about inadequate payments, Spotify, being the leading service, faces particular scrutiny due to its free service tier, which allows users to listen to music for free, though with advertisements between tracks. The free service tier has led to a variety of major album releases being delayed or withdrawn from the service. In response to the allegations about unfair compensation, Spotify claims that it benefits the industry by migrating users away from unauthorized copying and less monetized platforms to its free service tier, and then downgrading that service until they upgrade to paid accounts. A study has shown that record labels keep a large share of the money earned from Spotify, and the CEO of Merlin Network, a representative body for over 10,000 independent labels, has also observed significant yearly growth in earnings from Spotify, while clarifying that Spotify pays labels, not artists. In 2017, as part of its efforts to renegotiate license deals ahead of going public, Spotify announced that artists would be able to make albums temporarily exclusive to paid subscribers if the albums are part of Universal Music Group or the Merlin Network. In 2016, Spotify was criticized for allegedly making certain artists' music harder to find than others', as these artists had released their music to the rival streaming service Apple Music before releasing it to Spotify. In May 2018, Spotify attracted criticism for its "Hate Content & Hateful Conduct policy", under which it removed the music of R. Kelly and XXXTentacion from its editorial and algorithmic playlists, explaining: "When we look at promotion, we look at issues around hateful conduct, where you have an artist or another creator who has done something off-platform that is so particularly out of line with our values, egregious, in a way that it becomes something that we don't want to associate ourselves with." R. Kelly has faced accusations of sexual abuse, while XXXTentacion was on trial for domestic abuse in a case that did not reach a judgement before his death that June. This policy was revoked in June because the company deemed the original wording too "vague", stating: "Across all genres, our role is not to regulate artists. Therefore, we are moving away from implementing a policy around artist conduct". However, artists such as Gary Glitter and Lostprophets are still hidden from Spotify's radio stations.
According to some computer science and music experts, various music communities are often ignored or overlooked by music streaming services such as Spotify. The most commonly perceived error is said to be caused by a lack of diverse scope within curation staff, including the overlooking of mainstay artists in large genres, potentially causing a categorical homogenization of musical styles and affecting even mainline artists, such as A Tribe Called Quest within hip hop. In March 2021, David Dayen argued in The American Prospect that musicians were in peril due to monopolies in streaming services like Spotify. Daniel Ek, co-founder and CEO of Spotify, had discussed what he called an artist-friendly streaming solution: "An extension of the internet radio craze of the early 2000s, Spotify would license content from record labels, and then support artists as people listened to their music." In January 2022, 270 scientists, physicians, professors, doctors, and healthcare workers wrote an open letter to Spotify expressing concern over "false and societally harmful assertions" on Joe Rogan's podcast, The Joe Rogan Experience, and asked Spotify to "establish a clear and public policy to moderate misinformation on its platform". The 270 signatories objected to Rogan broadcasting COVID-19 misinformation, citing "a highly controversial" episode featuring guest Robert Malone (#1757). On 26 January, Neil Young removed his music from Spotify after it refused to remove the podcast. Other artists and podcasters, such as Joni Mitchell, Nils Lofgren, Brené Brown, and Crosby, Stills & Nash, also announced a boycott of Spotify. Spotify promised to add content advisories for anything containing discussions related to COVID-19 and posted additional rules. In 2024, Spotify Wrapped, which has become a key marketing strategy for the company, was criticised for appearing to rely on AI-generated content and producing uninteresting insights. In July 2025, several artists joined a boycott of Spotify after CEO and co-founder Daniel Ek raised another $600 million in support of the German defense company Helsing through his investment fund; Helsing is an AI defense software company that also manufactures military strike drones. In 2021, a conspiracy theory began to take hold that Spotify was filling its playlists in some genres (including jazz, chill, and "peaceful piano") with stock music attributed to a handful of little-known musicians, mostly Swedish, in an effort to meet demand for background music. In 2022, an investigation by the Swedish daily Dagens Nyheter found that around twenty songwriters were behind the work of more than five hundred "artists", and that thousands of their tracks were on Spotify and had been streamed millions of times, benefitting those artists over others who might otherwise have supplied the background or ambient music. In the 2025 book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, journalist Liz Pelly speculated that Spotify allowed these artists onto the platform because, whenever users chose to listen to their music, it effectively reduced the number of plays, and therefore payouts, for other musicians. Pelly advocated a move away from the current royalty model, in which the artists people listen to the most make the most money, toward a more generalized approach in which decisions about who gets paid rest with industry or government bodies rather than audiences.
Spotify does not sell user data, but in 2016 it began selling broader trend data, often termed "streaming intelligence", to marketing firms, making the data available directly to clients. According to journalist and author Liz Pelly, Spotify, "under the guise of AI-powered recommendations has developed a surveillance apparatus driven by emotion profiling and pseudoscience." In July 2025, a leak containing the listening data of several politicians, journalists, and businesspeople, titled the Panama Playlists, was released online.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Oral_tradition] | [TOKENS: 8891]
Contents Oral tradition Oral tradition, or oral lore, is a form of human communication in which knowledge, art, beliefs, ideas and culture are received, preserved, and transmitted orally from one generation to another. The transmission is through speech or song and may include folktales, ballads, chants, prose or poetry. The information is mentally recorded by oral repositories, sometimes termed "walking libraries", who are usually also performers. Oral tradition is a medium of communication for a society to transmit oral history, oral literature, oral law and other knowledge across generations without a writing system, or in parallel to a writing system. It is the most widespread medium of human communication, and oral traditions often remain in use in the modern era for cultural preservation. Religions such as Buddhism, Hinduism, Catholicism, and Jainism have used oral tradition, in parallel to writing, to transmit their canonical scriptures, rituals, hymns and mythologies. African societies have broadly been labelled oral civilisations, contrasted with literate civilisations, due to their reverence for the oral word and widespread use of oral tradition. Oral tradition is memories, knowledge, and expression held in common by a group over many generations: it is the long preservation of immediate or contemporaneous testimony. It may be defined as the recall and transmission of specific, preserved textual and cultural knowledge through vocal utterance. Oral tradition is usually popular, and can be exoteric or esoteric. It speaks to people according to their understanding, unveiling itself in accordance with their aptitudes. As an academic discipline, oral tradition refers both to objects and methods of study. It is distinct from oral history, which is the recording of personal testimony of those who experienced historical eras or events. Oral tradition is also distinct from the study of orality, defined as thought and its verbal expression in societies where the technologies of literacy (writing and print) are unfamiliar. Folklore is one type of oral tradition, but not the only one.
History According to John Foley, oral tradition has been an ancient human tradition found in "all corners of the world". Modern archaeology has been unveiling evidence of the human efforts to preserve and transmit arts and knowledge that depended completely or partially on an oral tradition, across various cultures: The Judeo-Christian Bible reveals its oral traditional roots; medieval European manuscripts are penned by performing scribes; geometric vases from archaic Greece mirror Homer's oral style. (...) Indeed, if these final decades of the millennium have taught us anything, it must be that oral tradition never was the other we accused it of being; it never was the primitive, preliminary technology of communication we thought it to be. Rather, if the whole truth is told, oral tradition stands out as the single most dominant communicative technology of our species as both a historical fact and, in many areas still, a contemporary reality. — John Foley, Signs of Orality Before the introduction of text, oral tradition was the only means of communication by which societies and their institutions could be established. Despite the spread of literacy in the last century, oral tradition remains the dominant means of communication in the world.
In Africa, the oral tradition includes proverbs, folktales, songs, dances, customs, traditional medicine, religious practices, and cultural sayings that are told and expressed to teach lessons about life, social systems, religion, and spirituality. All indigenous African societies use oral tradition to learn their origin and history, civic and religious duties, crafts and skills, as well as traditional myths and legends. It is also a key socio-cultural component in the practice of their traditional spiritualities, as well as mainstream Abrahamic religions. Jan Vansina differentiates between oral and literate civilisations, stating: "The attitude of members of an oral society toward speech is similar to the reverence members of a literate society attach to the written word. If it is hallowed by authority or antiquity, the word will be treasured." For centuries in Europe, all data felt to be important were written down, with the most important texts, such as the Bible, prioritised, while only trivia, such as songs, legends, anecdotes, and proverbs, remained unrecorded. In Africa, all the principal political, legal, social, and religious texts were transmitted orally. When the Bamums in Cameroon invented a script, the first texts to be written down were the royal chronicle and the code of customary law. Most African courts had archivists who learnt by heart the royal genealogy and history of the state, and served as its unwritten constitution. The performance of a tradition is accentuated and rendered alive by various gestures, social conventions, and the unique occasion in which it is performed. Furthermore, the climate in which traditions are told influences their content. In Burundi, traditions were short because most were told at informal gatherings and everyone had to have their turn; in neighbouring Rwanda, many narratives were longer because a one-man professional had to entertain his patron for a whole evening, with every production checked by fellow specialists and errors punishable. Frequently, glosses or commentaries were presented parallel to the narrative, sometimes answering questions from the audience to ensure understanding, although often someone would learn a tradition without asking their master questions and, not really understanding the meaning of its content, would speculate in the commentary. Apart from in people's minds, oral traditions exist only when they are told, and so the frequency of telling a tradition aids its preservation. These African ethnic groups also utilize oral tradition to develop and train the human intellect and the memory to retain information and sharpen imagination. Perhaps the most famous repository of oral tradition is the west African griot (named differently in different languages). The griot is a hereditary position and exists in Dyula, Soninke, Fula, Hausa, Songhai, Wolof, Serer, and Mossi societies among many others, although most famously in Mandinka society. Griots constitute a caste and perform a range of roles, including those of historian or living library, musician, poet, mediator of family and tribal disputes, and spokesperson, and they served in the king's court, in a role not dissimilar to that of the European bard. They keep records of all births, deaths, and marriages through the generations of the village or family. When Sundiata Keita founded the Mali Empire, he was offered Balla Fasséké as his griot to advise him during his reign, giving rise to the Kouyate line of griots.
Griots often accompany their telling of oral tradition with a musical instrument, as the Epic of Sundiata is accompanied by the balafon, or as the kora accompanies other traditions. In modern times, some griots and descendants of griots have dropped their historian role and focus on music, with many finding success; however, many still maintain their traditional roles. Kenya safeguarded its oral tradition by ratifying the UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage in October 2007. Albanian traditions have been handed down orally across generations. They have been preserved through traditional memory systems that have survived intact into modern times in Albania, a phenomenon explained by the lack of state formation among Albanians and their ancestors, the Illyrians, which enabled them to preserve their "tribally" organized society. This distinguished them from civilizations such as Ancient Egypt and the Minoans and Mycenaeans, who underwent state formation and disrupted their traditional memory practices. Albanian epic poetry has been analysed by Homeric scholars to acquire a better understanding of Homeric epics. The long oral tradition that has sustained Albanian epic poetry reinforces the idea that pre-Homeric epic poetry was oral. The theory of oral-formulaic composition was also developed in part through the scholarly study of Albanian epic verse. The Albanian traditional singing of epic verse from memory is one of the last survivors of its kind in modern Europe, and the last survivor of the Balkan traditions. "All ancient Greek literature", states Steve Reece, "was to some degree oral in nature, and the earliest literature was completely so". Homer's epic poetry, states Michael Gagarin, "was largely composed, performed and transmitted orally". As folklores and legends were performed in front of distant audiences, the singers would substitute the names in the stories with local characters or rulers to give the stories a local flavor and thus connect with the audience, but this made the historicity embedded in the oral tradition unreliable. The lack of surviving texts about the Greek and Roman religious traditions has led scholars to presume that these were ritualistic and transmitted as oral traditions, but some scholars disagree that the complex rituals in the ancient Greek and Roman civilizations were an exclusive product of an oral tradition. An Irish seanchaí (plural: seanchaithe), meaning bearer of "old lore", was a traditional Irish-language storyteller (the Scottish Gaelic equivalent being the seanchaidh, anglicised as shanachie). The job of a seanchaí was to serve the head of a lineage by passing information orally from one generation to the next about Irish folklore and history, particularly in medieval times. The potential for oral transmission of history in ancient Rome is evidenced primarily by Cicero, who discusses the significance of oral tradition in works such as Brutus, Tusculan Disputations, and On the Orator. While Cicero's reliance on Cato's Origines may limit the breadth of his argument, he nonetheless highlights the importance of storytelling in preserving Roman history. Valerius Maximus also references oral tradition in Memorable Doings and Sayings (2.1.10). Wiseman argues that celebratory performances served as a vital medium for transmitting Roman history and that such traditions evolved into written forms by the third century CE.
He asserts that the history of figures like the house of Tarquin was likely passed down through oral storytelling for centuries before being recorded in literature. Although Flower critiques the lack of ancient evidence supporting Wiseman's broader claims, Wiseman maintains that dramatic narratives fundamentally shaped historiography. In Asia, the transmission of folklore, mythologies, and scriptures in ancient India, across the different Indian religions, was by oral tradition, preserved with precision with the help of elaborate mnemonic techniques. According to Goody, the Vedic texts likely involved both a written and oral tradition, calling them "parallel products of a literate society". More recently, research has shown that oral performance of (written) texts could be a philosophical activity in early China. In India, the primary Hindu scriptures, the Vedas, are a prominent example of oral tradition. Pandits who memorized three Vedas were called Trivedis, while those who memorized four Vedas were called Chaturvedis. By transferring knowledge from generation to generation, Hindus protected their ancient mantras in the Vedas. The early Buddhist texts are also generally believed to be of oral tradition; scholars have compared inconsistencies in the transmitted versions of literature from various oral societies, such as the Greek and Serbian cultures, and noted that the Vedic literature is too consistent and vast to have been composed and transmitted orally across generations without being written down. In the Middle East, Arabic oral tradition has significantly influenced literary and cultural practices. Arabic oral tradition encompassed various forms of expression, including metrical poetry, unrhymed prose, rhymed prose (saj'), and prosimetrum—a combination of prose and poetry often employed in historical narratives. Poetry held a position of particular importance, as it was believed to be a more reliable medium for information transmission than prose. This belief stemmed from observations that highly structured language, with its rhythmic and phonetic patterns, tended to undergo fewer alterations during oral transmission. Each genre of rhymed poetry served distinct social and cultural functions, ranging from spontaneous compositions at celebrations to carefully crafted historical accounts, political commentaries, and entertainment pieces. Among these, the folk epics known as siyar (singular: sīra) were considered the most intricate. These prosimetric narratives, combining prose and verse, emerged in the early Middle Ages. While many such epics circulated historically, only one has survived as a sung oral poetic tradition: Sīrat Banī Hilāl. This epic recounts the westward migration and conquests of the Banu Hilal Bedouin tribe from the 10th to 12th centuries, culminating in their rule over parts of North Africa before their eventual defeat. The historical roots of Sīrat Banī Hilāl are evident in the present-day distribution of groups claiming descent from the tribe across North Africa and parts of the Middle East. The epic's development into a cohesive narrative was first documented by the historian Ibn Khaldūn in the 14th century. In his writings, Ibn Khaldūn describes collecting stories and poems from nomadic Arabs, using these oral sources to discuss the merits of colloquial versus classical poetry and the value of oral histories in written historical works.
The Torah and other ancient Jewish literature, the Judeo-Christian Bible, and the texts of the early centuries of Christianity are rooted in an oral tradition, and the term "People of the Book" is a medieval construct. This is evidenced, for example, by the multiple scriptural statements by Paul admitting "previously remembered tradition which he received" orally. Australian Aboriginal culture has thrived on oral traditions and oral histories passed down through thousands of years. In a study published in February 2020, new evidence showed that both Budj Bim and Tower Hill volcanoes erupted between 34,000 and 40,000 years ago. Significantly, this is a "minimum age constraint for human presence in Victoria", and it could also be interpreted as evidence that the oral histories of the Gunditjmara people, an Aboriginal Australian people of south-western Victoria, which tell of volcanic eruptions, are some of the oldest oral traditions in existence. A basalt stone axe found underneath volcanic ash in 1947 had already proven that humans inhabited the region before the eruption of Tower Hill. Native American society has always relied upon oral tradition, if not storytelling, to convey knowledge, morals, and traditions, a trait Western settlers deemed as representing an inferior race with neither culture nor history, and one often cited as a reason behind indoctrination. Writing systems are not known to have existed among Native North Americans before contact with Europeans, except among some Mesoamerican cultures, and possibly the South American quipu and North American wampum, although those two are debatable. Oral storytelling traditions flourished in a context without the use of writing to record and preserve history, scientific knowledge, and social practices. While some stories were told for amusement and leisure, most functioned as practical lessons from tribal experience applied to immediate moral, social, psychological, and environmental issues. Stories fuse fictional, supernatural, or otherwise exaggerated characters and circumstances with real emotions and morals as a means of teaching. Plots often reflect real-life situations and may be aimed at particular people known by the story's audience. In this way, social pressure could be exerted without directly causing embarrassment or social exclusion. For example, rather than yelling, Inuit parents might deter their children from wandering too close to the water's edge by telling a story about a sea monster with a pouch for children within its reach. One single story could provide dozens of lessons. Stories were also used as a means to assess whether traditional cultural ideas and practices are effective in tackling contemporary circumstances or whether they should be revised. Native American storytelling is a collaborative experience between storyteller and listeners. Native American tribes generally have not had professional tribal storytellers marked by social status. Stories could and can be told by anyone, with each storyteller using their own vocal inflections, word choice, content, or form. Storytellers not only draw upon their own memories, but also upon a collective or tribal memory extending beyond personal experience but nevertheless representing a shared reality.
Native languages have in some cases up to twenty words to describe physical features like rain or snow and can describe the spectra of human emotion in very precise ways, allowing storytellers to offer their own personalized take on a story based on their own lived experiences. Fluidity in story deliverance allowed stories to be applied to different social circumstances according to the storyteller's objective at the time. One's rendition of a story was often considered a response to another's rendition, with plot alterations suggesting alternative ways of applying traditional ideas to present conditions. Listeners might have heard the story told many times, or even may have told the same story themselves. This does not take away from a story's meaning, as curiosity about what happens next was less of a priority than hearing fresh perspectives on well-known themes and plots. Elder storytellers generally were not concerned with discrepancies between their version of historical events and neighboring tribes' versions of similar events, such as in origin stories. Tribal stories are considered valid within the tribe's own frame of reference and tribal experience. The 19th-century Oglala Lakota tribal member Four Guns was known for his justification of the oral tradition and criticism of the written word. Stories are used to preserve and transmit both tribal history and environmental history, which are often closely linked. Native oral traditions in the Pacific Northwest, for example, describe natural disasters like earthquakes and tsunamis. Various cultures from Vancouver Island and Washington have stories describing a physical struggle between a Thunderbird and a Whale. One such story tells of the Thunderbird, which can create thunder by moving just a feather, piercing the Whale's flesh with its talons, causing the Whale to dive to the bottom of the ocean, bringing the Thunderbird with it. Another depicts the Thunderbird lifting the Whale from the Earth then dropping it back down. Regional similarities in themes and characters suggest that these stories mutually describe the lived experience of earthquakes and floods within tribal memory. According to one story from the Suquamish Tribe, Agate Pass was created when an earthquake expanded the channel as a result of an underwater battle between a serpent and a bird. Other stories in the region depict the formation of glacial valleys and moraines and the occurrence of landslides, with stories being used in at least one case to identify and date earthquakes that occurred in 900 CE and 1700 CE. Further examples include Arikara origin stories of emergence from an "underworld" of persistent darkness, which may represent the remembrance of life in the Arctic Circle during the last ice age, and stories involving a "deep crevice", which may refer to the Grand Canyon. Despite such examples of agreement between geological and archeological records on one hand and Native oral records on the other, some scholars have cautioned against the historical validity of oral traditions because of their susceptibility to detail alteration over time and lack of precise dates. The Native American Graves Protection and Repatriation Act considers oral traditions a viable source of evidence for establishing the affiliation between cultural objects and Native Nations.
Transmission Oral traditions face the challenge of accurate transmission and verifiability of the accurate version, particularly when the culture lacks written language or has limited access to writing tools.
Oral cultures have employed various strategies that achieve this without writing. For example, heavily rhythmic speech filled with mnemonic devices enhances memory and recall. A few useful mnemonic devices include alliteration, repetition, assonance, and proverbial sayings. In addition, the verse is often metrically composed with an exact number of syllables or morae, as in Greek and Latin prosody and in Chandas found in Hindu and Buddhist texts. The verses of the epic or text are typically composed so that long and short syllables are repeated according to certain rules; if an error or inadvertent change is made, an internal examination of the verse reveals the problem. Oral traditions can also be passed on through plays and acting, as shown in modern-day Cameroon by the Graffis or Grasslanders, who perform and deliver speeches to teach their history through oral tradition. Such strategies facilitate transmission of information without a written intermediate, and they can also be applied to oral governance. Rudyard Kipling's The Jungle Book provides an excellent demonstration of oral governance in the Law of the Jungle.[citation needed] Not only does grounding rules in oral proverbs allow for simple transmission and understanding, but it also legitimizes new rulings by allowing extrapolation. These stories, traditions, and proverbs are not static, but are often altered upon each transmission, barring any change to the overall meaning. In this way, the rules that govern the people are modified by the whole and not authored by a single entity. Ancient texts of Hinduism, Buddhism and Jainism were preserved and transmitted by an oral tradition; for example, the śrutis of Hinduism, called the Vedas, the oldest of which trace back to the second millennium BCE. Michael Witzel explains this oral tradition as follows: The Vedic texts were orally composed and transmitted, without the use of script, in an unbroken line of transmission from teacher to student that was formalized early on. This ensured an impeccable textual transmission superior to the classical texts of other cultures; it is, in fact, something like a tape-recording... Not just the actual words, but even the long-lost musical (tonal) accent (as in old Greek or in Japanese) has been preserved up to the present. — Michael Witzel Ancient Indians developed techniques for listening, memorization and recitation of their knowledge, in schools called Gurukul, maintaining exceptional accuracy across the generations. Many forms of recitation, or pathas, were designed to aid accuracy in recitation and the transmission of the Vedas and other knowledge texts from one generation to the next. All hymns in each Veda were recited in this way; for example, all 1,028 hymns, with their 10,600 verses, of the Rigveda were so preserved, as were all the other Vedas, including the Principal Upanishads and the Vedangas. Each text was recited in a number of ways, to ensure that the different methods of recitation acted as cross-checks on one another. Pierre-Sylvain Filliozat summarizes this as: These extraordinary retention techniques guaranteed an accurate Śruti, fixed across the generations, not just in terms of unaltered word order but also in terms of sound. That these methods have been effective is testified to by the preservation of the most ancient Indian religious text, the Ṛgveda (c. 1500 BCE).
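The error-detecting role of strict meter can be illustrated computationally. The following is a minimal sketch in Python, assuming a toy vowel-run syllable counter and an invented two-line verse; it illustrates only the principle that a fixed per-line syllable budget localizes corruption, and is not a reconstruction of any actual patha or Vedic scheme.

    import re

    def count_syllables(line):
        # Toy heuristic: treat each maximal run of vowels as one syllable.
        return len(re.findall(r"[aeiouy]+", line.lower()))

    def audit(expected_counts, recitation):
        # Compare a recitation's per-line syllable counts with the meter's
        # expected counts; a mismatch localizes a probable transmission error.
        return [i for i, (want, line) in enumerate(zip(expected_counts, recitation))
                if count_syllables(line) != want]

    canonical = ["remember the river and the reed",
                 "carry the ember to the sea"]
    meter = [count_syllables(line) for line in canonical]  # expected counts

    garbled = ["remember the river and reed",  # a word lost in transmission
               "carry the ember to the sea"]
    print(audit(meter, garbled))  # -> [0]: the altered line fails its count

As in the recitation traditions described above, the check does not require comparing against a written master copy; the verse's own form carries the redundancy.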
Research by Milman Parry and Albert Lord indicates that the verse of the Greek poet Homer has been passed down not by rote memorization but by "oral-formulaic composition". In this process, extempore composition is aided by the use of stock phrases or "formulas" (expressions that are used regularly "under the same metrical conditions, to express a particular essential idea"). In the case of the work of Homer, formulas included eos rhododaktylos ("rosy-fingered dawn") and oinops pontos ("wine-dark sea"), which fit in a modular fashion into the poetic form (in this case six-colon Greek hexameter). Since its development, the theory of oral-formulaic composition has been "found in many different time periods and many different cultures", and according to another source (John Miles Foley) it has "touch[ed] on" over 100 "ancient, medieval and modern traditions". The most recent of the world's major religions, Islam claims two major sources of divine revelation, the Quran and hadith, compiled in written form relatively shortly after being revealed. The oral milieu in which these sources were revealed, and their oral form in general, are important. The Arab poetry that preceded the Quran, and the hadith, were orally transmitted. Few Arabs were literate at the time, and paper was not available in the Middle East. The written Quran is said to have been created in part through memorization by Muhammad's companions, and the decision to create a standard written work is said to have come after the deaths in the Battle of Yamama of a large number of Muslims who had memorized the work. For centuries, copies of the Quran were transcribed by hand, not printed, and their scarcity and expense made reciting the Quran from memory, not reading, the predominant mode of teaching it to others. To this day the Quran is memorized by millions, and its recitation can be heard throughout the Muslim world from recordings and mosque loudspeakers (during Ramadan). Muslims state that some who teach memorization and recitation of the Quran constitute the end of an "unbroken chain" whose original teacher was Muhammad himself. It has been argued that "the Qur'an's rhythmic style and eloquent expression make it easy to memorize," and that it was made so to facilitate the "preservation and remembrance" of the work. Islamic doctrine holds that from the time it was revealed to the present day, the Quran has not been altered,[Note 3] its continuity from divine revelation to its current written form ensured by the large numbers of Muhammad's supporters who had reverently memorized the work, a careful compiling process, and divine intervention. (Muslim scholars agree that although scholars have worked hard to separate corrupt from uncorrupted hadith, this other source of revelation is not nearly so free of corruption, because of the hadith's great political and theological influence.) At least two non-Muslim scholars (Alan Dundes and Andrew G. Bannister) have examined the possibility that the Quran was not just "recited orally, but actually composed orally". Bannister postulates that some parts of the Quran, such as the seven re-tellings of the story of Iblis and Adam and the repeated phrase "which of the favours of your Lord will you deny?" in sura 55, make more sense addressed to listeners than to readers. Bannister, Dundes and other scholars (Shabbir Akhtar, Angelika Neuwirth, Islam Dayeh) have also noted the large amount of "formulaic" phraseology in the Quran, consistent with the "oral-formulaic composition" mentioned above.
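Quantitative claims of this kind, including the estimates cited below, rest on counting repeated word sequences. The following is a minimal Python sketch of such an n-gram analysis, assuming whitespace tokenization, a repetition threshold of two, and an invented sample text; it illustrates the general method only, not Bannister's actual database, grammar tagging, or criteria.

    from collections import Counter

    def formula_density(text, n=3, min_count=2):
        # Split into words, list every n-word sequence, and report the share
        # of n-gram positions occupied by a sequence that recurs at least
        # min_count times (a crude stand-in for a "formula").
        words = text.lower().split()
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts = Counter(grams)
        repeated = sum(1 for g in grams if counts[g] >= min_count)
        return repeated / len(grams) if grams else 0.0

    sample = ("praise the lord of the dawn praise the lord of the sea "
              "praise the lord of the storm")
    print(round(formula_density(sample, n=3), 2))  # -> 0.56 on this toy text

Note that the reported density depends heavily on the phrase length n and the repetition threshold, which is why the published figures vary with the length of the phrase searched.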
The most common formulas are the attributes of Allah (all-mighty, all-wise, all-knowing, all-high, etc.), often found as doublets at the end of a verse. Among the other repeated phrases[Note 4] is "Allah created the heavens and the earth" (found 19 times in the Quran). As much as one-third of the Quran is made up of "oral formulas", according to Dundes' estimates. Bannister, using a computer database of the (original Arabic) words of the Quran and of their "grammatical role, root, number, person, gender and so forth", estimates that, depending on the length of the phrase searched, somewhere between 52% (three-word phrases) and 23% (five-word phrases) are oral formulas. Dundes reckons his estimates confirm "that the Quran was orally transmitted from its very beginnings". Bannister believes his estimates "provide strong corroborative evidence that oral composition should be seriously considered as we reflect upon how the Qur'anic text was generated." Dundes argues oral-formulaic composition is consistent with "the cultural context of Arabic oral tradition", quoting researchers who have found poetry reciters in the Najd (the region next to where the Quran was revealed) using "a common store of themes, motives, stock images, phraseology and prosodical options", and "a discursive and loosely structured" style "with no fixed beginning or end" and "no established sequence in which the episodes must follow".[Note 5] The Catholic Church upholds that its teaching contained in its deposit of faith is transmitted not only through scripture but also through sacred tradition. The Second Vatican Council affirmed in Dei verbum that the teachings of Jesus Christ were initially passed on to early Christians by "the Apostles who, by their oral preaching, by example, and by observance handed on what they had received from the lips of Christ, from living with Him, and from what He did". The Catholic Church asserts that this mode of transmission of the faith persists through current-day bishops, who, by right of apostolic succession, have continued the oral passing on of what had been revealed through Christ in their preaching as teachers. In Eastern Orthodoxy, there is one Tradition, the tradition of the church, incorporating the scriptures and the teaching of the Church Fathers. Sacred Tradition for the Eastern Orthodox is the deposit of faith given by Jesus to the Apostles and passed on in the Church from one generation to the next without addition, alteration, or subtraction. Study The modern study of oral tradition has its roots in the work of the Serb scholar Vuk Stefanović Karadžić (1787–1864), a contemporary and friend of the Brothers Grimm. Karadžić pursued projects of "salvage folklore" (similar to rescue archaeology) in the cognate traditions of the South Slavic regions which would later be gathered into Yugoslavia, and with the same admixture of romantic and nationalistic interests (he considered all those speaking the Eastern Herzegovinian dialect as Serbs). Somewhat later, but as part of the same scholarly enterprise of nationalist studies in folklore, the turcologist Vasily Radlov (1837–1918) would study the songs of the Kara-Kirghiz in what would later become the Soviet Union; Karadžić and Radlov would provide models for the work of Parry. In a separate development, the media theorist Marshall McLuhan (1911–1980) would begin to focus attention on the ways that communicative media shape the nature of the content conveyed.
He would serve as mentor to the Jesuit Walter Ong (1912–2003), whose interests in cultural history, psychology and rhetoric would result in Orality and Literacy (Methuen, 1982) and the important but less-known Fighting for Life: Contest, Sexuality and Consciousness (Cornell, 1981). These two works articulated the contrasts between cultures defined by primary orality, writing, print, and the secondary orality of the electronic age. Ong's works also made possible an integrated theory of oral tradition which accounted for both the production of content (the chief concern of Parry-Lord theory) and its reception. This approach, like McLuhan's, kept the field open not just to the study of aesthetic culture but to the way physical and behavioral artifacts of oral societies are used to store, manage and transmit knowledge, so that oral tradition provides methods for investigating cultural differences, other than the purely verbal, between oral and literate societies. The most-often studied section of Orality and Literacy concerns the "psychodynamics of orality". This chapter seeks to define the fundamental characteristics of "primary" orality and summarizes a series of descriptors (including but not limited to verbal aspects of culture) which might be used to index the relative orality or literacy of a given text or society. In advance of Ong's synthesis, John Miles Foley began a series of papers based on his own fieldwork on South Slavic oral genres, emphasizing the dynamics of performers and audiences. Foley effectively consolidated oral tradition as an academic field when he compiled Oral-Formulaic Theory and Research in 1985. The bibliography gives a summary of the progress scholars had made in evaluating the oral tradition up to that point, and includes a list of all relevant scholarly articles relating to the theory of oral-formulaic composition. He also established the journal Oral Tradition and founded the Center for Studies in Oral Tradition (1986) at the University of Missouri. Foley developed Oral Theory beyond the somewhat mechanistic notions presented in earlier versions of Oral-Formulaic Theory, by extending Ong's interest in cultural features of oral societies beyond the verbal, by drawing attention to the agency of the bard, and by describing how oral traditions bear meaning. The bibliography would establish a clear underlying methodology which accounted for the findings of scholars working in the separate linguistic fields (primarily Ancient Greek, Anglo-Saxon and Serbo-Croatian). Perhaps more importantly, it would stimulate conversation among these specialties, so that a network of independent but allied investigations and investigators could be established. Foley's key works include The Theory of Oral Composition (1988); Immanent Art (1991); Traditional Oral Epic: The Odyssey, Beowulf and the Serbo-Croatian Return-Song (1993); The Singer of Tales in Performance (1995); Teaching Oral Traditions (1998); and How to Read an Oral Poem (2002). His Pathways Project (2005–2012) draws parallels between the media dynamics of oral traditions and the Internet. The theory of oral tradition would undergo elaboration and development as it grew in acceptance. While the number of formulas documented for various traditions proliferated, the concept of the formula remained lexically bound. However, numerous innovations appeared, such as the "formulaic system"[Note 6] with structural "substitution slots" for syntactic, morphological and narrative necessity (as well as for artistic invention).
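Read structurally, a "formulaic system" resembles a fixed frame whose slots accept interchangeable fillers. Below is a minimal Python sketch under that reading; the frame, the slot fillers, and the assumption that any combination yields a well-formed line are invented for illustration and do not reproduce any documented system.

    import itertools

    # A toy formulaic system: one fixed frame with three substitution slots.
    # Swapping fillers changes the line's content while preserving its shape.
    frame = "{epithet} {hero} spoke to the assembled {audience}"
    slots = {
        "epithet": ["swift-footed", "god-favored"],
        "hero": ["Achilles", "Odysseus"],
        "audience": ["Achaeans", "councillors"],
    }

    def realizations(frame, slots):
        # Yield every line the system can generate from its slot fillers.
        keys = list(slots)
        for combo in itertools.product(*(slots[k] for k in keys)):
            yield frame.format(**dict(zip(keys, combo)))

    for line in realizations(frame, slots):
        print(line)  # 2 x 2 x 2 = 8 traditional-sounding variants

In an actual tradition the fillers would additionally be constrained to be metrically equivalent, which is what lets the performer substitute freely during live composition.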
Sophisticated models such as Foley's "word-type placement rules" followed. Higher levels of formulaic composition were defined over the years, such as "ring composition", "responsion" and the "type-scene" (also called a "theme" or "typical scene"). Examples include the "Beasts of Battle" and the "Cliffs of Death". Some of these characteristic patterns of narrative details (like "the arming sequence", "the hero on the beach", and "the traveler recognizes his goal") would show evidence of global distribution. At the same time, the fairly rigid division between oral and literate was replaced by recognition of transitional and compartmentalized texts and societies, including models of diglossia (Brian Stock, Franz Bäuml, and Eric Havelock). Perhaps most importantly, the terms and concepts of "orality" and "literacy" came to be replaced with the more useful and apt "traditionality" and "textuality". Very large units would be defined (the Indo-European Return Song), and areas outside of military epic would come under investigation: women's song, riddles and other genres. The methodology of oral tradition now conditions a large variety of studies, not only in folklore, literature and literacy, but in philosophy, communication theory and semiotics, covering a very broad and continually expanding variety of languages and ethnic groups, and perhaps most conspicuously in biblical studies, in which Werner Kelber has been especially prominent. The annual bibliography is indexed by 100 areas, most of which are ethnolinguistic divisions. Present developments explore the implications of the theory for rhetoric and composition, interpersonal communication, cross-cultural communication, postcolonial studies, rural community development, popular culture and film studies, and many other areas. The most significant areas of theoretical development at present may be the construction of systematic hermeneutics and aesthetics. The theory of oral tradition encountered early resistance from scholars who perceived it as potentially supporting either one side or the other in the controversy between what were known as "unitarians" and "analysts", that is, scholars who believed Homer to have been a single, historical figure, and those who saw him as a conceptual "author function", a convenient name to assign to what was essentially a repertoire of traditional narrative. A much more general dismissal of the theory and its implications simply described it as "unprovable". Some scholars, mainly outside the field of oral tradition, represent (either dismissively or with approval) this body of theoretical work as reducing the great epics to children's party games like "telephone" or "Chinese whispers". While such games provide amusement by showing how messages distort content via uncontextualized transmission, Parry's supporters argue that the theory of oral tradition reveals how oral methods optimized the signal-to-noise ratio and thus improved the quality, stability and integrity of content transmission. There were disputes concerning particular findings of the theory. For example, those trying to support or refute Crowne's hypothesis found the "Hero on the Beach" formula in numerous Old English poems. Similarly, it was also discovered in other works of Germanic origin, Middle English poetry, and even an Icelandic prose saga.
J.A. Dane, in an article characterized as "polemics without rigor", claimed that the appearance of the theme in Ancient Greek poetry, a tradition without known connection to the Germanic, invalidated the notion of "an autonomous theme in the baggage of an oral poet". Within Homeric studies specifically, Lord's The Singer of Tales, which focused on problems and questions that arise in conjunction with applying oral-formulaic theory to problematic texts such as the Iliad, the Odyssey, and even Beowulf, influenced nearly all of the articles written on Homer and oral-formulaic composition thereafter. However, in response to Lord, Geoffrey Kirk published The Songs of Homer, questioning Lord's extension of the oral-formulaic nature of Serbian and Croatian literature (the area from which the theory was first developed) to Homeric epic. Kirk argues that Homeric poems differ from those traditions in their "metrical strictness", "formular system[s]", and creativity. In other words, Kirk argued that Homeric poems were recited under a system that gave the reciter much more freedom to choose words and passages to get to the same end than the Serbo-Croatian poet, who was merely "reproductive". Shortly thereafter, Eric Havelock's Preface to Plato revolutionized how scholars looked at Homeric epic by arguing not only that it was the product of an oral tradition, but also that the oral formulas contained therein served as a way for ancient Greeks to preserve cultural knowledge across many different generations. Adam Parry, in his 1966 work "Have We Homer's Iliad?", theorized the existence of the most fully developed oral poet up to his time, a person who could (at his discretion) creatively and intellectually create nuanced characters in the context of the accepted, traditional story. In fact, he discounted the Serbo-Croatian tradition to an "unfortunate" extent, choosing to elevate the Greek model of oral tradition above all others. Lord reacted to Kirk's and Parry's essays with "Homer as Oral Poet", published in 1968, which reaffirmed Lord's belief in the relevance of Yugoslav poetry and its similarities to Homer, and downplayed the intellectual and literary role of the reciters of Homeric epic. Many of the criticisms of the theory have been absorbed into the evolving field as useful refinements and modifications. For example, in what Foley called a "pivotal" contribution, Larry Benson introduced the concept of "written-formulaic" to describe the status of some Anglo-Saxon poetry which, while demonstrably written, contains evidence of oral influences, including heavy reliance on formulas and themes. A number of individual scholars in many areas continue to have misgivings about the applicability of the theory or the aptness of the South Slavic comparison, and particularly what they regard as its implications for the creativity which may legitimately be attributed to the individual artist. However, at present, there seems to be little systematic or theoretically coordinated challenge to the fundamental tenets of the theory; as Foley put it, "there have been numerous suggestions for revisions or modifications of the theory, but the majority of controversies have generated further understanding." Historiography The development of African historiography in the mid to late 20th century saw a movement towards utilising oral sources alongside auxiliary disciplines, due to the paucity of written sources.
Oral traditions differ from written texts in that they are more directly subject to the sensory experience of the listener(s).: 202 In 1961, Jan Vansina published Oral Tradition, in which he made the case for the validity of oral sources as historical sources; it is regarded as one of the most influential works written about African history and oral tradition.: 171–172 Oral traditions have been utilised in the reconstruction of various Indigenous peoples' histories, including those of the Maori, Native Americans, and Polynesians.: 1–15 Historians collect and transcribe oral traditions via fieldwork, a practice that was initially foreign to historians, who would usually spend most of their time sifting through archives and libraries.[b] In Africa, however, most of the early tapes and transcriptions were not submitted to public depositories, gravely impairing verifiability and future critique of interpretations. Researchers tend not to be fluent in the local language and employ interpreters to translate questions and answers, harming the communication of meanings and understanding.: 170, 173, 177 Individualised interviews tend to be preferred because in group performances, which consist of the narrator and audience sharing and shaping the story, improvisation to entertain may be prioritised over accuracy of the tale. Occasionally, traditions are influenced by written works or incorporate recently acquired information, a phenomenon called feedback.: 180, 183 As oral tradition rarely incorporates chronological devices, lists of rulers have been crucial to establishing dates and chronologies. This is done via generational averaging, with the most common length chosen for generations being 27 years; for example, a ruler reckoned ten generations before one whose date is independently known to be 1800 CE would be placed around 1530 CE. In some cases, a ruler or event is mentioned in contemporary written sources whose dates are known. Some lists have been known to grow over time, harming their credibility.: 186 Barbara Cooper emphasises the creativity of the oral poet and criticises the formulaic approach, saying that the meaning sits in the performance and is not necessarily captured through analysis of a transcription or interpretation of the words. Karin Barber said that oral traditions enact struggles and power, not only of historical individuals but also of the oral poet, in that the oral "text" only exists for the speaker and listeners.: 200–201 Jan Bender Shetler wrote that oral historians "reconstruct (rather than reproduce) oral traditions through the use of mnemonic systems, the central elements of which scholars of oral tradition call core images or clichés", and that the core images are the key to historical interpretation. There have been various academic debates in African historiography surrounding oral tradition. The first, in the 1960s, involved Jan Vansina and his students developing a rigorous approach to recover the past from oral traditions, counteracting scepticism and outright dismissal of the concept of African history. This was successful, despite not engaging and cooperating with African-American movements around oral history. The second focussed on the argument that oral traditions consisted of faithful memories of past events, which faced criticism from functionalists, who argued that oral traditions function to reinforce present-day realities and give relatively little information about the past (the "presentist critique"), and from structuralists, who emphasised the mythological and symbolic elements of oral tradition (the "cosmological critique").
The cosmological critique was answered by Joseph Miller's The African Past Speaks (1980), in which historians emphasised the need to pay attention to how cultural understanding, political struggle, and memory shape traditions, and to explore and analyse discrepancies between traditions, which tend to signal problems, shifts, struggles, and loud silences. On the other hand, the presentist critique has proved pertinent and has been harder to dismiss.: 193–197 A folklorist critique of Africanist historians emphasised the role of the individual traditional oral historian in the crafting and preservation of oral traditions (and the possibility of the infusion of autobiographical or experiential information, necessitating inquiry about the storyteller's life), rather than Africanists' focus on the influence of institutions, as well as the importance of an emic (insider) approach, rather than an etic (outsider) approach in which the traditions are transcribed and interpreted from an outsider/European perspective.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-176] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware and has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the company's then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company; many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflicts of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous", though Musk revived the legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, and would use the proceeds to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring was illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors are appointed by the OpenAI Foundation, which can remove them at any time, and members of the Foundation's board also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was needed to pay for the use of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added it to many Windows installations, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, which the partners planned to fund over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024. The growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale to existing investors, authorized at up to $10 billion, which valued the company at $500 billion and made OpenAI the world's most valuable privately owned company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board had initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up this board observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In October 2025, OpenAI also acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama redistributed the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. Also in September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion-dollar deal with AMD under which OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share-price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership; under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and are included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, gaining attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product (a minimal usage sketch appears below). Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model, along with OpenAI o1, an early reasoning model internally codenamed "Strawberry". Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
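As an illustration of the API mentioned above, the following is a minimal sketch of a request using the official openai Python package; the model name is a placeholder, an OPENAI_API_KEY environment variable is assumed, and the sketch shows only the general shape of a hosted-API call, not any specific product version described in this section.

    # Minimal sketch of a call to OpenAI's hosted API (v1-style client).
    # Assumes the "openai" package is installed and OPENAI_API_KEY is set;
    # the model name below is a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model identifier
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "In one sentence, what is an API?"},
        ],
    )
    print(response.choices[0].message.content)

Commercial products such as ChatGPT, GPTs, and the agents described in this section are layered over calls of this general kind.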
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind had solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model it described as better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with only a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI subsequently signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. That same month, the FTC launched an investigation into OpenAI over allegations that the company had scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while the two companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation and argued that the state bill encroaches on matters better handled at the federal level. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation.
Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.
OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books.
In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4.
In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI on copyright grounds. The litigation was said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs, who also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission.
In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs that he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement.
In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied, and OpenAI claimed that it could disclose neither the recipients of ChatGPT's output nor the sources it had used.
OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its usage policies included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies instead prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons".
In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, Stein-Erik Soelberg, then 56 years old, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, who was reportedly paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who are disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-392] | [TOKENS: 12858]
Minecraft
Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities.
Originally created by Markus "Notch" Persson using the Java programming language, the game was handed to Jens "Jeb" Bergensten, who took control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.
Gameplay
Minecraft is a 3D sandbox video game with no required goals to accomplish, giving players a large amount of freedom in choosing how to play. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or trade with villagers (non-player characters), exchanging emeralds for various goods and vice versa.
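The block world described above maps naturally onto a chunked voxel store: the world is divided into fixed-size columns of cells, each holding a block ID, and a chunk is created only when a player first touches it. The Python sketch below is a minimal illustration of that idea; the names (Chunk, World, get_block, set_block) and the chunk dimensions are assumptions made for this example, not Mojang's closed-source implementation.

    # Minimal sketch of a chunked voxel store (illustrative names and sizes).
    CHUNK_X, CHUNK_Y, CHUNK_Z = 16, 384, 16  # 16x16 block columns, tall Y axis

    class Chunk:
        def __init__(self):
            # One block ID per cell, flattened into a single list; 0 = air.
            self.blocks = [0] * (CHUNK_X * CHUNK_Y * CHUNK_Z)

        def _index(self, x, y, z):
            return (y * CHUNK_Z + z) * CHUNK_X + x

        def get_block(self, x, y, z):
            return self.blocks[self._index(x, y, z)]

        def set_block(self, x, y, z, block_id):
            self.blocks[self._index(x, y, z)] = block_id

    class World:
        def __init__(self):
            self.chunks = {}  # (chunk_x, chunk_z) -> Chunk, created on demand

        def set_block(self, x, y, z, block_id):
            # Floor division and modulo map a world coordinate to its chunk;
            # only visited chunks ever exist in memory, which is what makes
            # effectively infinite horizontal worlds affordable.
            key = (x // CHUNK_X, z // CHUNK_Z)
            chunk = self.chunks.setdefault(key, Chunk())
            chunk.set_block(x % CHUNK_X, y, z % CHUNK_Z, block_id)

Under this scheme, breaking and placing blocks, the core loop described above, reduces to set_block calls with different block IDs (0 standing in for air).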
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, such as Steve or Alex, but are able to create and upload their own skins.
Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively.
The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentional and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks out. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.
Minecraft features three independent dimensions accessible through portals, each providing an alternate game environment. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, whereby players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
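A note on the world generation described above: because terrain is derived deterministically from the map seed, the same seed always reproduces the same world, which is why seeds can be shared between players. The following Python sketch shows the principle with simple hash-based value noise; the function names and constants are illustrative assumptions, and the game's real generator is far more elaborate (layered noise, biomes, caves, structures).

    import hashlib
    import math

    def cell_value(seed, x, z):
        # Deterministic pseudo-random value in [0, 1) for an integer grid cell.
        digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def height(seed, x, z, scale=16.0, base=64, amplitude=32):
        # Bilinearly interpolate the four surrounding grid cells so the
        # terrain varies smoothly instead of jumping from cell to cell.
        gx, gz = x / scale, z / scale
        x0, z0 = math.floor(gx), math.floor(gz)
        tx, tz = gx - x0, gz - z0
        h00 = cell_value(seed, x0, z0)
        h10 = cell_value(seed, x0 + 1, z0)
        h01 = cell_value(seed, x0, z0 + 1)
        h11 = cell_value(seed, x0 + 1, z0 + 1)
        top = h00 + (h10 - h00) * tx
        bottom = h01 + (h11 - h01) * tx
        return base + amplitude * (top + (bottom - top) * tz)

    # The same seed and coordinates always yield the same terrain height,
    # no matter when, or in what order, chunks are visited.
    assert height(12345, 100, 200) == height(12345, 100, 200)

The key property is that the height is a pure function of (seed, x, z), so distant chunks can be generated independently, on demand, and in any order.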
The End can be reached through an end portal, which consists of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.
In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty, and players starve if it empties. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die, and the items in their inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.
The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience their creations as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
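As an aside on the eye-of-ender mechanic described above: because thrown eyes travel toward the nearest stronghold, recording the player's position and facing angle for two throws from different spots places the stronghold near the intersection of the two rays. This triangulation trick is popular among speedrunners; the sketch below is an illustrative implementation that assumes the yaw convention shown on the game's debug screen (0 degrees = south, 90 = west, facing vector (-sin yaw, cos yaw)).

    import math

    def stronghold_estimate(x1, z1, yaw1, x2, z2, yaw2):
        # Build a horizontal direction vector for each throw from its yaw.
        r1, r2 = math.radians(yaw1), math.radians(yaw2)
        d1 = (-math.sin(r1), math.cos(r1))
        d2 = (-math.sin(r2), math.cos(r2))
        # Solve (x1, z1) + t*d1 == (x2, z2) + s*d2 for t via the 2D cross product.
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            raise ValueError("Rays are parallel; throw from two different angles.")
        t = ((x2 - x1) * d2[1] - (z2 - z1) * d2[0]) / denom
        return (x1 + t * d1[0], z1 + t * d1[1])

    # Example with made-up throw positions and yaw readings:
    # a throw from (0, 0) facing yaw -45 and one from (200, 0) facing yaw 0
    # intersect at (200, 200).
    print(stronghold_estimate(0, 0, -45.0, 200, 0, 0.0))

In practice, players often re-throw closer to the estimate, since small errors in the angle readings translate into large position errors over long distances.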
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers offer a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players.
In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced that Realms would add support for cross-platform play between Windows 10, iOS, and Android platforms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, alongside support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018.
The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
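To make the operator commands mentioned above concrete, here are a few representative examples in the game's own slash-command syntax (Java Edition; the # annotations are explanatory comments for this article, not part of the commands, and exact syntax varies by version):

    /time set day           # set the world clock to morning
    /tp Alice Bob           # teleport player Alice to player Bob
    /whitelist add Alice    # allow a specific username onto the server
    /ban Griefer123         # bar a username from joining the server
    /op Alice               # grant Alice operator permissions

Command blocks and data packs, discussed below, expose this same command language to custom map makers.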
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specifically for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation.
The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017.
In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement clarifying that "the code would not be run or read by the game itself" and would run only when the image containing the skin was opened.
In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.
Development
Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including the return of the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements.
The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.
On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".
After 2014, Minecraft's primary versions received roughly annual major updates—free to players who had purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in its update strategy; rather than releasing large updates annually, it opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.
On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled on arbitrary worlds with arbitrary texture packs. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition, with a Java Edition release planned for a later date.
Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010.
The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.
In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.
The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.
Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems.
On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as the "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.
An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018; it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks, and the full game was released to the Google Play Store for Chromebooks on 7 August 2020.
On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.
A version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. Its beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features such as world templates and add-on packs.
On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other. Both game versions would otherwise remain separate.
Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive.
Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.
Music and sound design
Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Speaking about learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you."
Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld found the sound engine "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine."
The background music in Minecraft consists of instrumental ambient music. To compose it, Rosenfeld used Ableton Live along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music added in the game's 2013 "Music Update". A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, whose label publishes all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions.
Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then was longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."
Reception
Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer called Minecraft "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seemed "incomplete or thrown together in haste".
A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for having worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content.
Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies.
The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version had sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million.
In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game of the Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and console.
Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition with its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang.
In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be phased out. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that it allowed improved security, including two-factor authentication, the blocking of cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and could no longer be migrated.
In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four.
The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted on three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, for causing divisions and flame wars within the community, and for potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired.
Cultural impact
In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release and thereby help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the early access model in indie game development.
Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators were employed by Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion.

Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate: Steve is a playable character whose moveset references building, crafting, and redstone, accompanied by an Overworld-themed stage. The game was also referenced by electronic music artist Deadmau5 in his performances, and is referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released; it made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding: "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project was launched in Kibera, one of Nairobi's informal settlements, beginning with a planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions.

In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries in the world, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed (a sketch of this kind of elevation-to-blocks mapping follows below).

Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people.

Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way.

Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
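To make the arithmetic behind a project like the Danish Geodata Agency's concrete, here is a minimal, hypothetical Python sketch of mapping real-world elevations onto columns of blocks under a fixed build limit. The function names, the toy elevation grid, and the block identifiers are illustrative assumptions, not code from the actual project.

SEA_LEVEL = 62       # default in-game sea level (y coordinate), assumed for this sketch
BUILD_LIMIT = 255    # highest block y in default Minecraft at the time (256 block heights)

def elevation_to_top(elevation_m: float) -> int:
    """Map a real-world elevation in metres to the y of the top block (1 block ~ 1 m)."""
    return min(SEA_LEVEL + round(elevation_m), BUILD_LIMIT)

def grid_to_blocks(grid):
    """Yield (x, z, y, block) tuples for a small elevation grid."""
    for z, row in enumerate(grid):
        for x, elevation in enumerate(row):
            top = elevation_to_top(elevation)
            for y in range(SEA_LEVEL, top + 1):
                yield (x, z, y, "grass_block" if y == top else "dirt")

if __name__ == "__main__":
    # A toy 3x3 patch; Denmark's 171 m maximum fits under the roughly 192 m of
    # headroom above sea level mentioned in the text (255 - 62 = 193).
    sample = [[0.5, 2.0, 4.0], [1.0, 171.0, 3.0], [0.0, 2.5, 1.0]]
    print(sum(1 for _ in grid_to_blocks(sample)), "blocks generated")

At full scale, a one-metre grid covering an entire country yields billions of such columns, which is presumably why real conversions process national elevation rasters tile by tile rather than all at once.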
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft."

Following the initial surge in Minecraft's popularity in 2010, other video games were criticised for their similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, which were the only major platforms yet to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team.

In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA notice was later withdrawn.

Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture sessions with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Federico_Faggin] | [TOKENS: 3446]
Federico Faggin

Federico Faggin (Italian pronunciation: [fedeˈriːko fadˈdʒin], Venetian: [faˈdʒiŋ]; born 1 December 1941) is an Italian-American physicist, engineer, inventor and entrepreneur. He is best known for designing the first commercial microprocessor, the Intel 4004. He led the 4004 (MCS-4) project and the design group during the first five years of Intel's microprocessor effort. While working at Fairchild Semiconductor in 1968, Faggin also created the self-aligned MOS (metal–oxide–semiconductor) silicon-gate technology (SGT), which made possible MOS semiconductor memory chips, CCD image sensors, and the microprocessor. After the 4004, he led development of the Intel 8008 and 8080, using his SGT methodology for random logic chip design, which was essential to the creation of Intel's early microprocessors. He was co-founder (with Ralph Ungermann) and CEO of Zilog, the first company dedicated solely to microprocessors, and led the development of the Zilog Z80 and Z8 processors. He was later the co-founder and CEO of Cygnet Technologies, and then Synaptics. In 2010, he received the 2009 National Medal of Technology and Innovation, the highest honor the United States confers for achievements related to technological progress. In 2011, Faggin founded the Federico and Elvia Faggin Foundation to support the scientific study of consciousness at US universities and research institutes. In 2015, the Faggin Foundation helped to establish a $1 million endowment for the Faggin Family Presidential Chair in the Physics of Information at UC Santa Cruz to promote the study of "fundamental questions at the interface of physics and related fields including mathematics, complex systems, biophysics, and cognitive science, with the unifying theme of information in physics."

Education and early career

Born in Vicenza, Italy, Federico grew up in an intellectual environment. His father, Giuseppe Faggin, was a scholar who wrote many academic books and translated, with commentaries, the Enneads of Plotinus from the original Greek into modern Italian. Federico had a strong interest in technology from an early age. He attended a technical high school in Vicenza, I.T.I.S. Alessandro Rossi, and later earned a laurea degree in physics, summa cum laude, from the University of Padua. Faggin joined Olivetti at the age of 19, where he co-designed and led the implementation of a small digital transistor computer with 4 K × 12 bits of magnetic memory (1960). The Olivetti R&D department subsequently developed one of the world's first programmable desktop electronic calculators, the Olivetti Programma 101 (1964). After this first work experience, Faggin studied physics at the University of Padua and taught the electronics laboratory course for third-year physics students in the academic year 1965–1966. In 1967, Faggin joined SGS-Fairchild, an Italy-based joint venture between the Italian company Società Generale Semiconduttori and the American firm Fairchild Semiconductor. There, he pioneered the MOS metal-gate process technology and designed the first two commercial MOS integrated circuits. Impressed by his achievements, the company transferred Faggin to Fairchild's Palo Alto, California offices in February 1968. When Fairchild later exited the joint venture, Faggin accepted a job offer to stay on with Fairchild in the United States.

Silicon Valley career

In Palo Alto, Faggin led the development of silicon-gate technology (SGT), designing its unique process architecture.
SGT, a MOSFET technology with a self-aligned silicon gate, became one of the most transformative advancements in microelectronics, laying the foundation for all modern NMOS and CMOS integrated circuits. It enabled key innovations, including MOS semiconductor memory chips, the first microprocessor, and the first CCD and EPROM devices with floating silicon gates. Replacing the earlier metal-gate MOS technology, SGT was rapidly adopted worldwide, and within a decade it rendered integrated circuits based on bipolar transistors obsolete.

At Fairchild, Faggin designed the first commercial integrated circuit using silicon-gate technology with self-aligned MOSFET transistors: the Fairchild 3708. The 3708 was an 8-bit analog multiplexer with decoding logic, replacing the equivalent Fairchild 3705 that used metal-gate technology. The 3708 was five times faster, had 100 times less junction leakage and was much more reliable than the 3705, demonstrating the superiority of SGT over metal-gate MOS. See also: Faggin, F.; Klein, T. (1969). "A Faster Generation of MOS Devices With Low Threshold Is Riding The Crest of the New Wave, Silicon-Gate IC's". Electronics, 29 September 1969.

Federico Faggin joined Intel from Fairchild in 1970 as the project leader and designer of the MCS-4 family of microprocessors, which included the 4004, the world's first single-chip microprocessor. Fairchild was not taking advantage of SGT, and Faggin wanted to use his new technology to design advanced chips. The 4004 (1971) was made possible by the advanced capabilities of SGT, enhanced by the novel random-logic chip design methodology that Faggin created at Intel. It was this new methodology, together with his several design innovations, that allowed him to fit the microprocessor into one small chip. A single-chip microprocessor, an idea then expected to lie many years in the future, became possible in 1971 by using SGT with two additional innovations: (1) "buried contacts", which doubled circuit density, and (2) bootstrap loads with two-phase clocks, previously considered impossible with SGT, which improved speed fivefold while halving the chip area compared with metal-gate MOS. The design methodology created by Faggin was used to implement all of Intel's early microprocessors and, later, Zilog's Z80.

The Intel 4004, a 4-bit CPU (central processing unit) on a single chip, was a member of a family of four custom chips designed for Busicom, a Japanese calculator manufacturer. The other members of the family (constituting the MCS-4 family) were: the 4001, a 2-kbit metal-mask programmable ROM with programmable input-output lines; the 4002, a 320-bit dynamic RAM with a 4-bit output port; and the 4003, a 10-bit static shift register with serial input and serial/parallel output, used as an I/O expander. Faggin promoted the idea of marketing the MCS-4 broadly to customers other than Busicom by showing Intel management how customers could design a control system using the 4004. He designed and built a 4004 tester using the 4004 itself as the tester's controller, thus convincing Bob Noyce to renegotiate the exclusivity clause with Busicom that had barred Intel from selling the MCS-4 line to other customers. In 2009, the four contributors to the 4004 were inducted as Fellows of the Computer History Museum.
Ted Hoff, head of the Application Research Department, formulated the architectural proposal and the instruction set with assistance from Stan Mazor and in conjunction with Busicom's Masatoshi Shima. However, none of them was a chip designer, and none was familiar with the new silicon-gate technology; the silicon design was the essential missing ingredient for making a microprocessor, since everything else was already known. Federico Faggin led the project in a different department, without Hoff's and Mazor's involvement. Faggin had invented the original SGT at Fairchild Semiconductor in 1968 and contributed additional refinements and inventions that made the implementation of the 4004 in a single chip possible. With help from Shima, Faggin completed the chip design in January 1971.

The Intel 2102A was a redesign of the Intel 2102 static RAM in which Faggin introduced the depletion load to Intel for the first time, combining silicon-gate technology with ion implantation. The design was done toward the end of 1973 by Faggin and Dick Pashley. The 2102A was five times faster than the 2102, opening a new direction for Intel. Faggin's silicon design methodology was used for implementing all of Intel's early microprocessors.

The Intel 8008 was the world's first single-chip 8-bit CPU and, like the 4004, was built with p-channel SGT. The 8008's development was originally assigned to Hal Feeney in March 1970 but was suspended until the 4004 was completed. It was resumed in January 1971 under Faggin's direction, utilizing the basic circuits and methodology he had developed for the 4004, with Feeney doing the chip design. The CPU architecture of the 8008 was originally created by CTC Inc. for the Datapoint 2200 intelligent terminal, in which it was implemented in discrete IC logic.

The Intel 4040 microprocessor (1974) was a much-improved, machine-code-compatible version of the 4004 CPU that could interface directly with standard memories and I/O devices. Faggin created the architecture of the 4040 and supervised Tom Innes, who did the design work.

The 8080 microprocessor (1974) was the first high-performance 8-bit microprocessor on the market, using the faster n-channel SGT. The 8080 was conceived and architected by Faggin and designed by Masatoshi Shima under Faggin's supervision. The 8080 was a major improvement over the 8008 architecture, yet it retained software compatibility with it. It was much faster and easier to interface to external memory and I/O devices than the 8008. The high performance and low cost of the 8080 let developers use microprocessors for many new applications, including the forerunners of the personal computer. When Faggin left Intel at the end of 1974 to found Zilog with Ralph Ungermann, he was the R&D department manager responsible for all MOS products except dynamic memories.

The Zilog Z80 was the first microprocessor created by Zilog, the first company entirely dedicated to microprocessors, started by Faggin and Ungermann in November 1974. Faggin was Zilog's president and CEO until the end of 1980; he conceived the Z80 CPU and its family of programmable peripheral components, and co-designed the CPU itself with project leader Masatoshi Shima. The Z80 CPU was a major improvement over the 8080, yet it retained software compatibility with it.
Much faster and with more than twice as many registers and instructions as the 8080, it was part of a family of components that included several intelligent peripherals: the Z80-PIO, a programmable parallel input-output controller; the Z80-CTC, a programmable counter-timer; the Z80-SIO, a programmable serial communications interface controller; and the Z80-DMA, a programmable direct memory access controller. This chip family allowed the design of powerful, low-cost microcomputers with performance comparable to minicomputers. The Z80 CPU had a substantially better bus structure and interrupt structure than the 8080 and could interface directly with dynamic RAM, since it included an internal memory-refresh controller. The Z80 was used in many early personal computers, as well as in video game systems such as the MSX, ColecoVision, and Master System. The Z80 ceased production in 2024.

The Zilog Z8 microcontroller (1978) was one of the first single-chip microcontrollers on the market. It integrated an 8-bit CPU, RAM, ROM and I/O facilities, sufficient for many control applications. Faggin conceived the Z8 in 1974, soon after he founded Zilog, but decided to give priority to the Z80. The Z8 was designed in 1976–78 and ended production in 2024.

The Communication CoSystem (1984) was conceived by Faggin and designed and produced by Cygnet Technologies, Inc., Faggin's second startup. Attached to a personal computer and a standard phone line, the CoSystem could automatically handle all of the user's personal voice and data communications, including electronic mail, database access, computer screen transfers during a voice call, and call record keeping. The patent covering the CoSystem is highly cited in the personal communication field.

In 1986, Faggin co-founded Synaptics, serving as its CEO until 1999 and as chairman from 1999 to 2009. Synaptics was initially dedicated to R&D in artificial neural networks for pattern-recognition applications using analog VLSI. Synaptics introduced the I1000, the world's first single-chip optical character recognizer, in 1991. In 1994, Synaptics introduced the touchpad to replace the cumbersome trackball then in use in laptop computers; the touchpad was broadly adopted by the industry. Synaptics also introduced the early touchscreens that were eventually adopted for smartphones and tablets, applications that now dominate the market. Faggin came up with the general product idea and led a group of engineers who further refined it through many brainstorming sessions. He is a co-inventor on ten patents assigned to Synaptics and is its chairman emeritus.

During his tenure as president and CEO of Foveon, from 2003 to 2008, Faggin revitalized the company and provided a new technological and business direction, resulting in image sensors superior in all critical parameters to the best competing sensors while using approximately half the chip size of competing devices. Faggin also oversaw the successful acquisition of Foveon by the Japanese Sigma Corporation in November 2008.

Founded in 2011, the Federico and Elvia Faggin Foundation supports the scientific study of consciousness at US universities and research institutes. The purpose of the Foundation is to advance the understanding of consciousness through theoretical and experimental research.
Faggin's interest in consciousness has its roots in the study of artificial neural networks at Synaptics, the company he co-founded in 1986, which prompted his inquiry into whether it is possible to build a conscious computer.

The theory of consciousness

In the book Irreducible: Consciousness, Life, Computers, and Human Nature (Essentia Books, 2024), Faggin proposed a theory of consciousness according to which consciousness is a purely quantum phenomenon, unique to each of us. He supports the theory with two theorems of quantum physics: the no-cloning theorem and Holevo's theorem. The first states that an unknown pure quantum state cannot be copied; the second limits the amount of information extractable by measurement to one classical bit per qubit describing the state (both are stated compactly below). On this basis, Faggin postulates that a quantum system in a pure state is aware of its state, since conscious experiences (qualia) have the essential properties of pure states: they are private knowledge, only minimally knowable from the outside. The mathematical representation of the experience (the pure state) does not describe the experience itself, which remains private and knowable only from within by the system in that state. On this view, no classical machine can ever be conscious, given that classical information is reproducible (program and data can be copied perfectly) while a quantum state is private. Consciousness is therefore, in Faggin's account, not tied to the functioning of the body and can continue to exist even after the body's death; the body behaves like a drone controlled "top down" by consciousness. The D'Ariano–Faggin theory builds on the theoretical work of Professor Giacomo D'Ariano, who derived quantum theory from principles based on information theory, and on Faggin's experiential, philosophical and scientific studies of the nature of consciousness.
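For readers who want the two theorems in symbols, a standard textbook formulation is sketched below in LaTeX; this is the conventional quantum-information statement, not notation taken from Faggin's book.

% No-cloning theorem: no fixed unitary U copies an arbitrary unknown pure state.
\[
\nexists\, U \ \text{unitary such that}\quad
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
= \lvert\psi\rangle \otimes \lvert\psi\rangle
\quad \text{for all } \lvert\psi\rangle .
\]
% Holevo's theorem: for an ensemble {p_x, rho_x}, the information accessible by
% any measurement is bounded by the Holevo quantity chi, which for a state of
% n qubits can never exceed n classical bits.
\[
I_{\mathrm{acc}} \;\le\; \chi
\;=\; S\!\Bigl(\sum_x p_x \rho_x\Bigr) \;-\; \sum_x p_x\, S(\rho_x)
\;\le\; \log_2 \dim \mathcal{H} \;=\; n .
\]

Here S denotes the von Neumann entropy; the final inequality is what the text paraphrases as "one classical bit for each qubit".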
========================================
[SOURCE: https://en.wikipedia.org/wiki/Folk_etymology] | [TOKENS: 2759]
Folk etymology

Folk etymology is a change in a word or phrase resulting from the replacement of an unfamiliar form by a more familiar one through popular usage. The form or the meaning of an archaic, foreign, or otherwise unfamiliar word is reinterpreted as resembling more familiar words or morphemes. The term folk etymology is a loan translation of German Volksetymologie, coined by Ernst Förstemann in 1852. Folk etymology is a productive process in historical linguistics, language change, and social interaction. Reanalysis of a word's history or original form can affect its spelling, pronunciation, or meaning, and is frequently seen in relation to loanwords or words that have become archaic or obsolete. "Folk etymology" (or "popular etymology") may also refer to a popular false belief about the etymology of a word or phrase that does not lead to a change in its form or meaning. To disambiguate the two usages, Ghil'ad Zuckermann proposes a clear-cut distinction between derivational-only popular etymology (DOPE), a popular false etymology involving no neologization, and generative popular etymology (GPE), neologization generated by a popular false etymology. Examples of words created or changed through folk etymology include the English dialectal form sparrowgrass, originally from Greek ἀσπάραγος ("asparagus"), remade by analogy to the more familiar words sparrow and grass. When the alteration of an unfamiliar word or phrase is made spontaneously by an individual, it is known as an eggcorn.

Productive force

The technical term "folk etymology" refers to a change in the form of a word caused by erroneous popular suppositions about its etymology. Until the academic development of comparative linguistics and the description of the laws underlying sound changes, the derivation of a word was mostly guesswork. Speculation about the original form of words in turn feeds back into the development of the word and thus becomes a part of a new etymology. Believing a word to have a certain origin, people begin to pronounce, spell, or otherwise use the word in a manner appropriate to that perceived origin. This popular etymologizing has had a powerful influence on the forms which words take. Examples in English include crayfish or crawfish, which are not historically related to fish but come from Middle English crevis, cognate with French écrevisse. Likewise chaise lounge, from the original French chaise longue ("long chair"), has come to be associated with the word lounge.

Related phenomena

Other types of language change caused by reanalysis of the structure of a word include rebracketing and back-formation. In rebracketing, users of the language change, misinterpret, or reinterpret the location of a boundary between words or morphemes. For example, the Old French word orenge 'orange tree' comes from Arabic النَّرَنْج an-naranj 'the orange tree', with the initial ⟨n⟩ of naranj understood as part of the article. Rebracketing in the opposite direction saw the Middle English a napron and a nadder become an apron and an adder. In back-formation, a new word is created by removing elements from an existing word that are interpreted as affixes. For example, Italian pronuncia 'pronunciation, accent' is derived from the verb pronunciare 'to pronounce, to utter', and English edit derives from editor. Some cases of back-formation are based on folk etymology.
Examples in English

In linguistic change caused by folk etymology, the form of a word changes so that it better matches its popular rationalisation. Typically this happens either to unanalysable foreign words or to compounds where the word underlying one part of the compound becomes obsolete.

There are many examples of words borrowed from foreign languages and subsequently changed by folk etymology, and the spelling of many borrowed words reflects it. For example, andiron, borrowed from Old French, was variously spelled aundyre or aundiren in Middle English, but was altered by association with iron. Other Old French loans altered in a similar manner include belfry (from berfrey) by association with bell, female (from femelle) by male, and penthouse (from apentis) by house. The variant spelling of licorice as liquorice comes from the supposition that it has something to do with liquid. Anglo-Norman licoris (influenced by licor 'liquor') and Late Latin liquirītia were respelled for similar reasons, though the ultimate origin of all three is Ancient Greek γλυκύρριζα glykyrrhiza 'sweet root'. Reanalysis of loanwords can affect their spelling, pronunciation, or meaning. The word cockroach, for example, was borrowed from Spanish cucaracha but was assimilated to the existing English words cock and roach. The phrase forlorn hope originally meant 'storming party, body of skirmishers', from Dutch verloren hoop 'lost troop', but confusion with English hope has given the term an additional meaning of 'hopeless venture'.

Sometimes imaginative stories are created to account for the link between a borrowed word and its popularly assumed sources. The names of the serviceberry, service tree, and related plants, for instance, come from the Latin name sorbus. The plants were called syrfe in Old English, which eventually became service. Fanciful stories suggest that the name comes from the fact that the trees bloom in spring, a time when circuit-riding preachers resume church services or when funeral services are carried out for people who died during the winter. A seemingly plausible but no less speculative etymology accounts for the form of Welsh rarebit, a dish made of cheese and toasted bread. The earliest known reference to the dish, in 1725, called it Welsh rabbit. The origin of that name is unknown, but presumably humorous, since the dish contains no rabbit. In 1785, Francis Grose suggested in A Classical Dictionary of the Vulgar Tongue that the dish is "a Welch rare bit", though the word rarebit was not common prior to Grose's dictionary. Both versions of the name are in current use; individuals sometimes express strong opinions concerning which version is correct.

When a word or other form becomes obsolete, words or phrases containing the obsolete portion may be reanalyzed and changed. Some compound words from Old English were reanalyzed in Middle or Modern English when one of the constituent words fell out of use. Examples include bridegroom, from Old English brydguma 'bride-man'. The word gome 'man', from Old English guma, fell out of use during the sixteenth century, and the compound was eventually reanalyzed with the Modern English word groom 'male servant'. A similar reanalysis caused sandblind, from unattested Old English *sāmblind 'half-blind' (with the once-common prefix sām- 'semi-'), to be respelled as though it were related to sand. The word island derives from Old English igland.
The modern spelling with the letter s is the result of comparison with the synonym isle, from Old French and ultimately a learned borrowing of Latin insula, though the Old French and Old English words are not historically related. In a similar way, the spelling of wormwood was likely affected by comparison with wood. The phrase curry favour, meaning to flatter, comes from Middle English curry favel 'groom a chestnut horse'. This was an allusion to a fourteenth-century French morality poem, Roman de Fauvel, about a chestnut-coloured horse who corrupts men through duplicity. The phrase was reanalyzed in early Modern English by comparison to favour as early as 1510. Words need not completely disappear before their compounds are reanalyzed. The word shamefaced was originally shamefast. The original meaning of fast 'fixed in place' still exists in the compounded words steadfast and colorfast, but by itself mainly in frozen expressions such as stuck fast, hold fast, and play fast and loose. The songbird name wheatear or white-ear is a back-formation from Middle English whit-ers 'white arse', referring to the prominent white rump found in most species. Although both white and arse are common in Modern English, the folk etymology may be a euphemism. Reanalysis of archaic or obsolete forms can lead to changes in meaning as well. The original meaning of hangnail referred to a corn on the foot. The word comes from Old English ang- + nægel ('anguished nail, compressed spike'), but the spelling and pronunciation were affected by folk etymology in the seventeenth century or earlier. Thereafter, the word came to be used for a tag of skin or torn cuticle near a fingernail or toenail.

Other languages

Several words in Medieval Latin were subject to folk etymology. For example, the word widerdonum, meaning 'reward', was borrowed from Old High German widarlōn 'repayment of a loan'; the l → d alteration is due to confusion with Latin donum 'gift'. Similarly, the word baceler or bacheler (related to modern English bachelor) referred to a junior knight. It is attested from the eleventh century, though its ultimate origin is uncertain. By the late Middle Ages its meaning was extended to the holder of a university degree inferior to master or doctor. This was later re-spelled baccalaureus, probably reflecting a false derivation from bacca laurea 'laurel berry', alluding to the possible laurel crown of a poet or conqueror. Likewise in Greek myth, many religious terms were folk-etymologised to suit common vocabulary. In Plato's dialogue Cratylus, the name of Zeus is folk-etymologised to connect it to zoe, the word for "life" as a phenomenon (compare the doublet bios, referring to a qualified life or lifespan, both of which are cognate with English "quick"), giving it the meaning "cause of life always to all things", through puns between alternate titles of Zeus (Zen and Dia) and the Greek words for "life" and "because of". In reality, his name is a reflex of *Dyēus, a Proto-Indo-European root meaning "bright/shining one". Diodorus Siculus wrote that Zeus was also called Zen because humans believed that he was the cause of life. Meanwhile, Lactantius wrote that he was called Zeus and Zen not because he is the giver of life, but because he was the first of the children of Cronus to live, making the meaning of his name "the one who lived".
The name of Orion, likewise, was folk-etymologised as a polite alteration of "Urion", referring to his conception through the gods urinating on his mother's ashes; today his name is speculated to have been borrowed from Akkadian Uru-annak, meaning "Heaven's light". In the fourteenth or fifteenth century, French scholars began to spell the verb savoir 'to know' as sçavoir on the false belief that it was derived from Latin scire 'to know'. In fact it comes from sapere 'to be wise'. The Italian word liocorno, meaning 'unicorn', derives from 13th-century lunicorno (lo 'the' + unicorno 'unicorn'); folk etymology based on lione 'lion' altered the spelling and pronunciation. Dialectal liofante 'elephant' was likewise altered from elefante by association with lione. The Dutch word for 'hammock' is hangmat; it was borrowed from Spanish hamaca (ultimately from Arawak amàca) and altered by comparison with hangen and mat, as though meaning 'hanging mat'. German Hängematte shares this folk etymology. Islambol, a folk etymology meaning 'Islam abounding', is one of the names of Istanbul used after the Ottoman conquest of 1453. An example from Persian is the word شطرنج shatranj 'chess', which derives from Sanskrit चतुरङ्ग chatur-anga ('four-army [game]'; 2nd century BCE) and, after losing the u to syncope, became چترنگ chatrang in Middle Persian (6th century CE). Today it is sometimes reanalysed as sad 'hundred' + ranj 'worry, mood', i.e. 'a hundred worries'. Some Indonesian feminists discourage use of the term wanita ('woman') in favour of perempuan, arguing that wanita has misogynistic roots: in Javanese, wanita is interpreted as a portmanteau of wani ditata ('dares to be controlled'), and the word is also traced to Sanskrit वनिता vanitā ('someone desired by men'). In Turkey, the Democrat Party changed its logo in 2007 to a white horse in front of a red background because many voters folk-etymologized its Turkish name Demokrat as demir kırat 'iron white horse'.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEMcDonnell1995-153] | [TOKENS: 10728]
PlayStation (console)

The PlayStation (codenamed PSX, abbreviated as PS, and retroactively known as the PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, in Europe on 29 September 1995, and in other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn.

Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design, and game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after its release and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units.

The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed one of the company's hardware engineering divisions and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. He convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired for working with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and kept him on as a protégé.

The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been a primary manufacturer within the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would be the sole beneficiary of licensing related to the music and film software that it had been aggressively pursuing as a secondary application.

The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines.

Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as it had broken an "unwritten law" that native companies do not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony briefly halted its research, but ultimately decided to build on what it had developed with Nintendo and Sega, turning it into a complete console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.

To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, a majority of those present at the meeting remained opposed, as did older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation Sony had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project.

The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.

According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that games with 3D imagery were possible. Maruyama said that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's North American launch referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under the Sony name; according to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy".

Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others'. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, and Namco subsequently based its Namco System 11 arcade board on PlayStation hardware, developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994.

Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005.

Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE offices in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour its own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing was available at dozens of locations around the world.

The PlayStation's architecture and its interoperability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded software compatibility should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect of development given the 3.5-megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team succeeded in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and the final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed.

Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700.

"When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock."

Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At its keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned the head of development, Steve Race, to the conference stage; Race said "$299" and left to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales; some retailers, such as KB Toys, responded by dropping the Saturn entirely.

The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared with the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high-street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.1 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64.

In India, the PlayStation was launched as a test market during 1999–2000 across Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002, at a price of Rs 7,990 with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark meant the console could not be released officially; the officially distributed Sega Saturn initially dominated the market, but as Sega's console withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console had been the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release the console there.

The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised with the controller's geometric button symbols standing in for letters, such as "LIVE IN Y◯UR W◯RLD. PL△Y IN ◯URS." (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols on the controller's four face buttons. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well.

Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound, as well as festival promoters, to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money to invest in such impromptu marketing.

In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64's launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them remained "very close", with neither console leading in sales for any meaningful length of time.

In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in mid-2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving the milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" coprocessor on the same die, the Geometry Transfer Engine (GTE), which handles 3D and matrix maths, to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or link multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video-compression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the GTE so that they can be processed and displayed on screen by the GPU. The GPU can render up to 4,000 sprites and 180,000 texture-mapped, light-sourced polygons per second, in addition to 360,000 flat-shaded polygons per second.

The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent revisions removed further connectors, including the parallel port, with late models retaining only the serial port.

Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software necessary to program PlayStation games and applications in C.
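To give a flavour of the fixed-point geometry work the GTE performs, the following minimal C sketch transforms a vertex by a 3×3 rotation matrix in the 4.12 fixed-point format the coprocessor operates in (1.0 is stored as 4096). The routine names and structure here are illustrative assumptions, not Sony's actual SDK or microcode:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: GTE rotation-matrix entries are 4.12
   fixed-point values (1 sign bit, 3 integer bits, 12 fractional bits),
   so 1.0 is stored as 4096. Names are invented for illustration. */

typedef int16_t fx412;              /* 4.12 fixed-point scalar */
#define FX_ONE 4096                 /* 1.0 in 4.12 */

/* Multiply two fixed-point values: widen, then shift the extra
   12 fractional bits back out. */
static int32_t fx_mul(int32_t a, int32_t b) { return (a * b) >> 12; }

/* Rotate a vertex by a 3x3 fixed-point matrix and add a translation,
   as the GTE does before perspective division. */
static void rotate_translate(const fx412 m[3][3], const int32_t t[3],
                             const int32_t v[3], int32_t out[3]) {
    for (int i = 0; i < 3; i++) {
        out[i] = t[i] + fx_mul(m[i][0], v[0])
                      + fx_mul(m[i][1], v[1])
                      + fx_mul(m[i][2], v[2]);
    }
}

int main(void) {
    /* Identity rotation, translated 100 units along z. */
    const fx412 m[3][3] = {{FX_ONE, 0, 0}, {0, FX_ONE, 0}, {0, 0, FX_ONE}};
    const int32_t t[3] = {0, 0, 100};
    const int32_t v[3] = {10, 20, 30};
    int32_t out[3];
    rotate_translate(m, t, v, out);
    printf("(%d, %d, %d)\n", out[0], out[1], out[2]); /* (10, 20, 130) */
    return 0;
}
```

Working in 16-bit fixed point rather than floating point is part of what kept the coprocessor small and fast, and the limited precision of such arithmetic is often cited as a source of the console's characteristic polygon jitter.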
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor for an extra layer of portability. Production of the LCD Combo pack ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ◯, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person.

Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously seen on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size.

The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, giving users finer control over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided to remove this haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that any legal action was taken. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller.

Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid-crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America.

In addition to playing games, most PlayStation models can play CD audio; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PlayStation and PS One differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile and runs on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of unfair competition and patent infringement for allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding: consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process.

Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects on the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off.

The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem on later models by making the sled out of die-cast metal and placing the laser unit further away from the power supply.

Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units.

Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan.

Third-party developers remained committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it.

Reception

The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling that of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not Nintendo's primary focus. By the late 1990s, the PlayStation had made Sony a highly regarded console brand, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth-best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute roughly 23% of the company's overall profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released under the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs, and hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in gaining mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console in its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard; Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter, and in June Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue.

In Japan, Sony published a wide variety of games in smaller print runs as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
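The "40% lower price, same net revenue" claim is easiest to see with a worked example. The media costs below are assumptions chosen purely for illustration, picked so that the numbers balance; they are not figures from the article:

```latex
% Assumed unit costs (illustrative only): cartridge ~\$32, pressed CD ~\$2.
\text{Cartridge game: } \$75 - \$32 = \$43 \text{ net per unit}
\text{CD game: } \$45 - \$2 = \$43 \text{ net per unit (a 40\% lower retail price)}
```

Under these assumed costs, the CD title can undercut the cartridge title by 40% at retail while returning the publisher the same net amount per unit sold.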
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates original PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system-on-a-chip with four Cortex-A35 processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit, along with 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions of certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-393] | [TOKENS: 12858]
Minecraft

Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities.

The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, albeit with numerous, generally small, differences.

Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second-highest-grossing video game film of all time.

Gameplay

Minecraft is a 3D sandbox video game with no required goals to accomplish, giving players a large amount of freedom in choosing how to play; an optional achievement system is included. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. Blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Only a few block types are affected by gravity; all others hold their position in the grid even when unsupported. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks, such as furnaces, which can cook food and smelt ores, and torches, which produce light, or trade with villagers (NPCs), exchanging emeralds for different goods and vice versa.
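As a rough illustration of the block world described above, the C sketch below models a "chunk" as a dense 16×16×16 array of block IDs, with mining and placing as simple array writes. The names, chunk size, and layout are assumptions for illustration only; Mojang's actual implementation is far more elaborate (palettes, lighting, serialization):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative voxel "chunk": a dense 16x16x16 array of block IDs.
   Everything here is an invented stand-in, not Mojang's code. */

#define CHUNK 16

enum { AIR = 0, DIRT = 1, STONE = 2 };

typedef struct {
    uint16_t blocks[CHUNK][CHUNK][CHUNK]; /* [y][z][x], 0 = air */
} Chunk;

static uint16_t get_block(const Chunk *c, int x, int y, int z) {
    return c->blocks[y][z][x];
}

/* "Mining" a block is just writing AIR; "placing" writes a material. */
static void set_block(Chunk *c, int x, int y, int z, uint16_t id) {
    c->blocks[y][z][x] = id;
}

int main(void) {
    static Chunk c; /* static storage: zero-initialised, i.e. all air */

    /* Fill the bottom four layers: stone below, dirt on top. */
    for (int y = 0; y < 4; y++)
        for (int z = 0; z < CHUNK; z++)
            for (int x = 0; x < CHUNK; x++)
                set_block(&c, x, y, z, y < 3 ? STONE : DIRT);

    set_block(&c, 8, 3, 8, AIR); /* mine one dirt block */
    printf("block at (8,3,8): %u\n", get_block(&c, 8, 3, 8)); /* prints 0 */
    return 0;
}
```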
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day-night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin from nine possibilities, including Steve and Alex, but are able to create and upload their own skins.

Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively.

The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentional and not. The implementation of horizontally infinite generation initially produced a glitch termed the "Far Lands" over 12 million blocks from the world center, where terrain generated in wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks out. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.

Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension, accessed via an obsidian portal and dominated by lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, in which players can give them gold ingots and receive items in return. Structures known as Nether fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop the blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
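The role of the map seed described above can be illustrated with a toy deterministic generator: hashing (seed, x, z) yields the same terrain height on every run, which is why sharing a seed reproduces a world. The hash mixer and height formula below are invented stand-ins, not Minecraft's actual noise functions:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy seed-driven terrain: the same (seed, x, z) always hashes to the
   same height. The mixing constants and formula are illustrative only. */

static uint64_t hash3(uint64_t seed, int64_t x, int64_t z) {
    uint64_t h = seed ^ ((uint64_t)x * 0x9E3779B97F4A7C15ULL)
                      ^ ((uint64_t)z * 0xC2B2AE3D27D4EB4FULL);
    h ^= h >> 33; h *= 0xFF51AFD7ED558CCDULL; h ^= h >> 33; /* mix bits */
    return h;
}

/* Terrain height in [60, 91], derived purely from the hash. A real
   generator would smooth (interpolate) between lattice points. */
static int height_at(uint64_t seed, int64_t x, int64_t z) {
    return 60 + (int)(hash3(seed, x, z) % 32);
}

int main(void) {
    const uint64_t seed = 123456789ULL;
    /* Same seed and coordinates produce the same heights on every run. */
    for (int x = 0; x < 8; x++)
        printf("height(%d, 0) = %d\n", x, height_at(seed, x, 0));
    return 0;
}
```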
The End can be reached through an end portal, which consists of twelve end portal frames and is found in underground Overworld structures known as strongholds. To find a stronghold, players must craft eyes of ender from an ender pearl and blaze powder; thrown eyes of ender travel in the direction of the nearest stronghold. Once the player reaches the stronghold, they can place an eye of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void, with a boss enemy called the Ender Dragon guarding the largest, central island. Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem. The poem, a roughly 1,500-word work by Irish novelist Julian Gough that takes about nine minutes to scroll past, is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.

In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive the night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves; health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die, and the items in their inventory are dropped unless the game is reconfigured not to do so. Players then respawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons; enchanted items are generally more powerful, last longer, or have other special effects.

The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world; it was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
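A toy sketch of the Survival-mode bookkeeping described above: health regenerates only while the hunger bar is full (or nearly so), and an empty bar starves the player. All constants and the tick structure are assumptions for illustration, not the game's actual tuning:

```c
#include <stdio.h>

/* Illustrative Survival-mode loop; constants are invented, not the
   game's real values or timing. */

#define MAX_HEALTH 20   /* 10 hearts */
#define MAX_HUNGER 20   /* 10 drumsticks */

typedef struct { int health, hunger; } Player;

static void tick(Player *p, int damage_taken) {
    p->health -= damage_taken;
    if (p->hunger > 0) p->hunger--;            /* activity drains hunger */
    if (p->hunger >= MAX_HUNGER - 2 && p->health < MAX_HEALTH)
        p->health++;                           /* regenerate when well fed */
    else if (p->hunger == 0)
        p->health--;                           /* starvation */
    if (p->health <= 0) printf("player died; inventory dropped\n");
}

int main(void) {
    Player p = { MAX_HEALTH, MAX_HUNGER };
    tick(&p, 3);                               /* take some fall damage */
    printf("health=%d hunger=%d\n", p.health, p.hunger); /* 18, 19 */
    p.hunger = MAX_HUNGER;                     /* eat food */
    tick(&p, 0);
    printf("health=%d hunger=%d\n", p.health, p.hunger); /* 19, 19 */
    return 0;
}
```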
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server using a host provider or by hosting one themselves, make a Realm, or connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup.

Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers offer a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player-versus-player combat (PvP) can be enabled to allow fighting between players.

In 2013, Mojang announced Minecraft Realms, a server-hosting service intended to enable players to run multiplayer games easily and safely without having to set up their own server. Unlike a standard server, only invited players can join a Realms server, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time; Bedrock Edition Realms server owners can invite up to 3,000 people, likewise with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, though players can play custom Minecraft maps; Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom maps. At Electronic Entertainment Expo 2016, Mojang announced Realms support for cross-platform play between Windows 10, iOS, and Android starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play, and Nintendo Switch support for Realms was released in July 2018.

The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files), which often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, both created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation.

The Xbox 360 Edition supported downloadable content, available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. The edition received support for texture packs in its twelfth title update, which also introduced "mash-up packs" combining texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and, by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013 and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on Nintendo's Super Mario franchise was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and mobile in April 2017.

In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself" and would run only when the image file containing the skin was opened.

In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features Persson explored in RubyDung was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction, heavily influencing the game's visual style, including the return of the first-person mode, the "blocky" aesthetic and the block-building fundamentals. Unlike Infiniminer, however, Persson wanted Minecraft to have RPG elements.

The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the following years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version, the second part of the "Adventure Update", on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game consoles, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal himself on Twitter, asking for a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA) which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalised on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions received roughly annual major updates—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. In late 2024, however, Mojang announced a shift in their update strategy: rather than releasing large updates annually, they opted for a more frequent schedule of smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as the various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build, named Minecraft Classic, was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release followed on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen-space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for Java Edition at a later date.

Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009.[k] On 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player: "Order of the Stone" came from the webcomic The Order of the Stick, while "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game; this initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but he later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded the video game studio Mojang alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December, assuring players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project.

The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced it had hired the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support for server modifications. The move included Mojang taking apparent ownership of the CraftBukkit server mod, though the legitimacy of this acquisition was later questioned, and it became controversial, due to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy to follow at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows 10 Edition of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added features such as world templates and add-on packs to this version of Minecraft. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the software package Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
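Rosenfeld's sampler trick of taking an unrelated recording and re-pitching it until it sounds right boils down to resampling. The following is a minimal sketch of that idea in Python, not Rosenfeld's actual workflow; the file name and the seven-semitone shift are illustrative assumptions.

# Minimal sketch: sampler-style pitch shifting by resampling.
# Assumes a mono 16-bit WAV file; "source.wav" is a hypothetical input.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("source.wav")

def pitch_shift(samples, semitones):
    # Convert a semitone offset to a frequency ratio.
    factor = 2 ** (semitones / 12)
    # Reading the waveform faster raises the pitch (and shortens the sound);
    # reading it slower lowers the pitch, just like a classic sampler.
    positions = np.arange(0, len(samples), factor)
    return np.interp(positions, np.arange(len(samples)), samples.astype(float))

shifted = pitch_shift(samples, -7)  # seven semitones down, an arbitrary choice
wavfile.write("shifted.wav", rate, shifted.astype(np.int16))

Pitching a source far outside its natural register, as with the fire-hose recording, is what lends such sounds their unfamiliar character.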
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine becoming the primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then ran longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition has sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game of the Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release in order to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most-searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in the first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
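As an illustration of the Boolean logic that such redstone machines implement, here is a minimal Python sketch of a half adder, the one-bit circuit from which player-built redstone adders and 8-bit computers are composed; this is an expository sketch, not code from any actual Minecraft mod or curriculum.

# Minimal sketch: the half adder at the heart of redstone arithmetic.
def AND(a: bool, b: bool) -> bool:
    return a and b

def XOR(a: bool, b: bool) -> bool:
    return a != b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    # The sum bit comes from XOR and the carry bit from AND; chaining these
    # with carry propagation yields the ripple-carry adders seen in large
    # redstone builds.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(bool(a), bool(b))
        print(f"{a} + {b} -> sum {int(s)}, carry {int(c)}")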
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as being "clones", often due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears ultimately proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson canceled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automated AI copyright-claim service. The DMCA takedown was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-177] | [TOKENS: 8773]
OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in Delaware in 2015 but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion in OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers worked for Google Brain, DeepMind, or Facebook, which offered equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced a $1 billion investment package in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the bid complicated Altman's restructuring plan by setting a benchmark for how highly the nonprofit's controlling stake should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, arguing that the restructuring was illegal and would strip governance safeguards from both the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which would be used on Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain a leading position in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion and made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft relinquished this observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. An investigation by Time uncovered that OpenAI had begun sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour after tax. Sama's spokesperson said that the $12.50 also covered other implicit costs, including infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. Also in September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal had not been finalized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion-dollar deal with AMD, committing to purchase six gigawatts' worth of AMD chips, starting with the MI450. OpenAI was also given the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share-price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which, according to the company, is better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for citation management, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this departure from its earlier openness. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they had never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including those containing personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki; co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI subsequently signed content deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. The investigation covered allegations that the company had scraped public data and published false and defamatory information; the agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced its security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal law. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he had forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI alleging copyright infringement. The lawsuits were said to chart a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI further claimed that it could not disclose the recipients of ChatGPT's output or the sources of the data used. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit using the service "to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, Stein-Erik Soelberg, who was 56 years old at the time, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, who was reportedly paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who appear disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-110] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level, the atmospheric pressure is a few thousandths of Earth's, the atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all of Earth's dry land. Fine dust is prevalent across the surface and in the atmosphere; under the low Martian gravity it is picked up and spread even by the weak winds of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period continues to the present and dominates the geological processes still shaping the surface. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible to the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. The first attempted flight to Mars took place in 1963 with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971, Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
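The comparison between Mars's surface area and Earth's dry land can be checked from the diameter quoted above. As a back-of-the-envelope sketch (the figure for Earth's land area, roughly $1.49\times10^{8}$ km², is a standard value not taken from this article):

$$A_{\text{Mars}} = 4\pi R^2 = 4\pi\left(\frac{6{,}779\ \text{km}}{2}\right)^2 \approx 1.44\times10^{8}\ \text{km}^2,$$

which is within a few percent of Earth's land area, consistent with the statement in the text.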
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon 20 times more massive than Phobos orbiting Mars billions of years ago, with Phobos being a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary periods: the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum, and the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for "Mariner Valleys", also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given size settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has a large eccentricity compared to Earth's; the planet approaches perihelion when it is summer in the southern hemisphere and winter in the northern, and aphelion when it is winter in the southern hemisphere and summer in the northern. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with wind speeds reaching over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also produce deposits of dry ice that cover the polar ice caps. Hydrology While Mars contains large amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and various forms of snow and frost, often mixed with snow of carbon dioxide (dry ice). Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
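The "five to seven times" deuterium enrichment quoted earlier in this section follows directly from the two D/H ratios. As a quick arithmetic sketch (not taken from the source), dividing the Martian value by the terrestrial one gives

$$f = \frac{(D/H)_{\text{Mars}}}{(D/H)_{\text{Earth}}} = \frac{(9.3 \pm 1.7)\times10^{-4}}{1.56\times10^{-4}} \approx 6.0 \pm 1.1,$$

i.e. roughly five to seven once the stated measurement uncertainty is carried through, consistent with the range given in the text.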
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, with a synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Mars and Earth are on opposite sides of the Sun, forming a straight line that crosses the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet's center.
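Those orbital radii, together with the orbital and synodic periods given earlier in this section, can be cross-checked with two standard formulas. The sketch below is illustrative only: Mars's gravitational parameter (μ ≈ 4.2828 × 10^13 m³/s²) and Earth's sidereal year (365.256 days) are assumed textbook values not taken from this article, and the moons' orbits are idealized as circular. It also reproduces the roughly 11-hour rise-to-rise interval of Phobos mentioned just below.

```python
import math

# Synodic period of Mars as seen from Earth: 1/S = 1/P_Earth - 1/P_Mars.
# P_Mars = 687 days is quoted in the text; P_Earth = 365.256 days (sidereal
# year) is an assumed standard value, not from the text.
P_EARTH = 365.256   # days
P_MARS = 687.0      # days
synodic = 1.0 / (1.0 / P_EARTH - 1.0 / P_MARS)
print(f"Earth-Mars synodic period: {synodic:.1f} days")   # ~779.9 days

# Orbital periods of Phobos and Deimos from Kepler's third law,
# T = 2*pi*sqrt(a^3 / mu), using the orbital radii quoted in the text.
# mu (GM of Mars) is an assumed standard value, not from the text.
MU_MARS = 4.2828e13  # m^3/s^2

def orbital_period_hours(a_km: float) -> float:
    """Circular-orbit period in hours for orbital radius a_km (km)."""
    a = a_km * 1000.0  # km -> m
    return 2.0 * math.pi * math.sqrt(a**3 / MU_MARS) / 3600.0

t_phobos = orbital_period_hours(9376.0)    # ~7.7 h
t_deimos = orbital_period_hours(23460.0)   # ~30.3 h
print(f"Phobos period: {t_phobos:.2f} h, Deimos period: {t_deimos:.2f} h")

# Apparent rise-to-rise interval of Phobos for an observer on Mars: the moon
# orbits faster than the planet rotates (sol = 24.66 h, quoted in the text),
# so it rises in the west roughly every 1/(1/T_orbit - 1/T_sol) hours.
SOL = 24.6597  # hours
rise_interval = 1.0 / (1.0 / t_phobos - 1.0 / SOL)
print(f"Phobos rise-to-rise interval: {rise_interval:.1f} h")  # ~11 h
```

Run as-is, the sketch recovers the 779.94-day synodic period and the ~11-hour figure for Phobos, so the quantities quoted in the article are mutually consistent.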
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman counterpart of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility is the involvement of a third body, or some form of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, itself formed from debris from a large impact on Mars and then destroyed by a more recent impact upon the satellite. More recently, a study by an international team of researchers suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past; analysis of rocks that record tidal processes on the planet indicates that those tides may have been regulated by such a moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur roughly every two years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but in later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις; more commonly, though, the Greeks called the planet Ares, after their god of war. It was the Romans who named the planet Mars, for their own god of war, often represented by the spear and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a decade-long study of the orbit of Mars, using Tycho Brahe's measurements of the planet's diurnal parallax to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei became the first to observe Mars through a telescope. The diurnal parallax of Mars was later measured telescopically in an effort to determine the Sun–Earth distance, first by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus ever observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, which flew past the planet in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: Phobos 1 (1988) and Mars Observer (1993), both of which fell silent before completing their missions, and Phobos 2 (1989), which malfunctioned in Mars orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved uncrewed spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to nine functioning spacecraft. Seven are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from past missions amounts to over seven tons; most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still being published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but the planet's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind owing to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed as possible evidence for life, since these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors and known on Earth to preserve signs of life, has also been found on the surface of impact craters on Mars; if life existed at those sites, the glass there could likewise have preserved its traces. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans for a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans presented to the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the following twenty years, enabled by the planned mass manufacture of Starship and initially sustained by resupply from Earth and by in-situ resource utilization on Mars, until the colony reaches full self-sufficiency. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-112] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The same dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897", and that the "modern use" of the term, meaning 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī had invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2,000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes); Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs; changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by Geoffrey W. A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (a configuration known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market; they are powered by systems on a chip (SoCs). Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the computer is controlled and provided with data; examples include keyboards, mice, and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. In simplified terms, the control system repeatedly fetches the instruction at the location given by the program counter, decodes it, and executes it, updating the program counter as it goes; in some types of CPU, steps may be performed concurrently or in a different order. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
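To make the program counter and the fetch-decode-execute cycle concrete, here is a deliberately tiny simulator in Python. It is a sketch only: the instruction names and the four-cell instruction format are invented for illustration and correspond to no real CPU. Its "add" instruction performs exactly the kind of memory-cell operation described in the next paragraph, and a "jump" works simply by overwriting the program counter.

    # A toy stored-program machine. Memory is a set of numbered cells
    # holding both instructions and data; pc is the program counter.
    def run(mem):
        pc = 0
        while mem[pc] != "halt":
            op, a, b, c = mem[pc], mem[pc + 1], mem[pc + 2], mem[pc + 3]
            pc += 4                        # advance to the next instruction
            if op == "put":                # put the number a into cell b
                mem[b] = a
            elif op == "add":              # cell c = cell a + cell b
                mem[c] = mem[a] + mem[b]
            elif op == "jump":             # change the program counter itself
                pc = a
        return mem

    mem = {0: "put", 1: 123, 2: 1357, 3: 0,      # put 123 into cell 1357
           4: "add", 5: 1357, 6: 2468, 7: 1595,  # cell 1595 = 1357 + 2468
           8: "halt",
           2468: 500}                            # data preloaded in cell 2468
    print(run(mem)[1595])                        # -> 623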
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
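Before continuing with peripherals, the byte-level number representation described above (2^8 = 256 patterns per byte, two's complement for negatives) can be made concrete with a short, purely illustrative Python sketch:

    # One byte stores 256 distinct bit patterns; whether a pattern means an
    # unsigned value (0..255) or a signed one (-128..127) is interpretation.
    for value in (0, 1, 127, -128, -1):
        raw = value.to_bytes(1, "big", signed=value < 0)   # the stored byte
        print(f"bits {raw[0]:08b} = unsigned {raw[0]:3d} = signed {value:4d}")
    # e.g. bits 11111111 = unsigned 255 = signed -1 (two's complement);
    # larger numbers simply use several consecutive bytes, e.g.
    # (500).to_bytes(2, "big") -> b'\x01\xf4'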
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn; a tiny simulation of the idea appears at the end of this section. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer.

It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
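Before moving on to software, the time-slicing idea described above can be simulated in a few lines of C: a scheduler loop hands each "program" one slice of work in turn, so all of them advance even though only one runs at any instant. This is only a sketch of the concept, not a real interrupt-driven kernel:

    #include <stdio.h>

    #define NTASKS 3
    #define WORK   5   /* slices each task needs in order to finish */

    int main(void) {
        int progress[NTASKS] = {0};   /* work completed by each "program" */
        int remaining = NTASKS;       /* tasks still unfinished */

        while (remaining > 0) {
            /* Each pass of this loop plays the role of the interrupt
               generator, forcing a switch to the next task. */
            for (int t = 0; t < NTASKS; t++) {
                if (progress[t] >= WORK) continue;   /* task already done */
                progress[t]++;                       /* one time slice of work */
                printf("slice -> task %d (progress %d/%d)\n", t, progress[t], WORK);
                if (progress[t] == WORK) remaining--;
            }
        }
        puts("all tasks complete, though only one ever ran at a time");
        return 0;
    }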
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
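A short program in the MIPS assembly language can do this; the following is a minimal sketch (the register choices, labels, and comments follow common MIPS convention and are illustrative) that sums the numbers from 1 to 1,000:

            addi $8, $0, 0        # initialize the running sum (register 8) to 0
            addi $9, $0, 1        # set the counter (register 9) to 1
    loop:   slti $10, $9, 1001    # is the counter less than 1001, i.e. at most 1000?
            beq  $10, $0, finish  # if not, the sum is complete: leave the loop
            add  $8, $8, $9       # add the counter to the running sum
            addi $9, $9, 1        # advance the counter to the next number
            j    loop             # jump back and repeat
    finish: add  $2, $8, $0       # copy the result into the output register ($2)

Each line is one instruction; the slti/beq pair implements the conditional jump that closes the loop.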
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.

A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] (For instance, the 1-to-1,000 summation shown earlier is a single line in C: for (int i = 1; i <= 1000; i++) sum += i; and a compiler turns that line into instructions much like those in the assembly sketch.) High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The design of small programs is relatively simple and involves analysing the problem, collecting inputs, using the programming constructs of the chosen language, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Rear Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with popularizing the term "bug" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.

Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.

Logic gates are a common abstraction that applies to most digital and analog computing paradigms (a small illustration appears at the end of this section). The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.

Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
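As the small illustration promised above: logic operations alone are enough to build arithmetic. The C sketch below wires a one-bit half adder out of the Boolean operations an ALU provides, with XOR giving the sum bit and AND the carry; this is the textbook construction, rendered here with C's bitwise operators:

    #include <stdio.h>

    int main(void) {
        /* A half adder: adds two one-bit inputs using only logic operations. */
        for (int a = 0; a <= 1; a++) {
            for (int b = 0; b <= 1; b++) {
                int sum   = a ^ b;   /* XOR gate: 1 when exactly one input is 1 */
                int carry = a & b;   /* AND gate: 1 only when both inputs are 1 */
                printf("%d + %d = carry %d, sum %d\n", a, b, carry, sum);
            }
        }
        return 0;
    }

Chaining such adders bit by bit yields the multi-bit addition an ALU performs, which is why AND, OR, XOR, and NOT suffice as building blocks for arithmetic hardware.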
========================================
[SOURCE: https://en.wikipedia.org/wiki/Multiplayer_video_game] | [TOKENS: 2896]
Contents Multiplayer video game A multiplayer video game is a video game in which more than one person can play in the same game environment at the same time, either locally on the same computing system (couch co-op), on different computing systems via a local area network, or via a wide area network, most commonly the Internet. Multiplayer games usually require players to share a single game system or use networking technology to play together over a greater distance; players may compete against one or more human contestants, work cooperatively with a human partner to achieve a common goal, or supervise other players' activity. Because multiplayer games allow players to interact with other individuals, they provide an element of social communication absent from single-player games.

The history of multiplayer video games extends over several decades, tracing back to the emergence of electronic gaming in the mid-20th century. One of the earliest instances of multiplayer interaction came with the development of Spacewar! in 1962 for the DEC PDP-1 computer by Steve Russell and colleagues at MIT. During the late 1970s and early 1980s, multiplayer gaming gained momentum within the arcade scene with classics like Pong and Tank. The transition to home gaming consoles in the 1980s further popularized multiplayer gaming. Titles like Super Mario Bros. for the NES and Golden Axe for the Sega Genesis introduced cooperative and competitive gameplay. Additionally, LAN gaming emerged in the late 1980s, enabling players to connect multiple computers for multiplayer gameplay, popularized by titles like Doom and Warcraft: Orcs & Humans. Players can also play together in the same room using splitscreen.

Non-networked Some of the earliest video games were two-player games, including early sports games (such as 1958's Tennis for Two and 1972's Pong), early shooter games such as Spacewar! (1962) and early racing video games such as Astro Race (1973). The first examples of multiplayer real-time games were developed on the PLATO system about 1973. Multi-user games developed on this system included 1973's Empire and 1974's Spasim; the latter was an early first-person shooter. Other early video games included turn-based multiplayer modes, popular in tabletop arcade machines. In such games, play is alternated at some point (often after the loss of a life). All players' scores are often displayed onscreen so players can see their relative standing. Danielle Bunten Berry created some of the first multiplayer video games, such as her debut, Wheeler Dealers (1978), and her most notable work, M.U.L.E. (1983).

Gauntlet (1985) and Quartet (1986) introduced co-operative 4-player gaming to the arcades. The games had broader consoles to allow for four sets of controls. Ken Wasserman and Tim Stryker identified three factors which make networked computer games appealing. John G. Kemeny wrote in 1972 that software running on the Dartmouth Time-Sharing System (DTSS) had recently gained the ability to support multiple simultaneous users, and that games were the first use of the functionality. DTSS's popular American football game, he said, now supported head-to-head play by two humans.
The first large-scale serial sessions using a single computer were STAR (based on Star Trek), OCEAN (a battle using ships, submarines and helicopters, with players divided between two combating cities) and 1975's CAVE (based on Dungeons & Dragons), created by Christopher Caldwell (with artwork and suggestions by Roger Long and assembly coding by Robert Kenney) on the University of New Hampshire's DECsystem-1090. The university's computer system had hundreds of terminals, connected (via serial lines) through cluster PDP-11s for student, teacher, and staff access. The games had a program running on each terminal (for each player), sharing a segment of shared memory (known as the "high segment" in the OS TOPS-10). The games became popular, and the university often banned them because of their RAM use. STAR was based on 1974's single-user, turn-oriented BASIC program STAR, written by Michael O'Shaughnessy at UNH.

Wasserman and Stryker in 1980 described in BYTE how to network two Commodore PET computers with a cable. Their article includes a type-in, two-player Hangman, and describes the authors' more sophisticated Flash Attack. SuperSet Software's Snipes (1981) uses networking technology that would become Novell NetWare. Digital Equipment Corporation distributed another multi-user version of Star Trek, Decwar, without real-time screen updating; it was widely distributed to universities with DECsystem-10s. In 1981 Cliff Zimmerman wrote an homage to Star Trek in MACRO-10 for DECsystem-10s and -20s using VT100-series graphics. "VTtrek" pitted four Federation players against four Klingons in a three-dimensional universe.

Flight Simulator II, released in 1986 for the Atari ST and Commodore Amiga, allowed two players to connect via modem or serial cable and fly together in a shared environment. MIDI Maze, an early first-person shooter released in 1987 for the Atari ST, featured network multiplay through a MIDI interface before Ethernet and Internet play became common. It is considered the first multiplayer 3D shooter on a mainstream system, and the first network multiplayer action game (with support for up to 16 players). There followed ports to a number of platforms (including Game Boy and Super NES) in 1991 under the title Faceball 2000, making it one of the first handheld, multi-platform first-person shooters and an early console example of the genre.

Networked multiplayer gaming modes are known as "netplay". The first popular video-game title with a local area network (LAN) version, 1991's Spectre for the Apple Macintosh, featured AppleTalk support for up to eight players. Spectre's popularity was partially attributed to the display of a player's name above their cybertank. There followed 1993's Doom, whose first network version allowed four simultaneous players.

Play-by-email multiplayer games use email to communicate between computers. Other turn-based variations not requiring players to be online simultaneously are play-by-post gaming and play-by-Internet. Some online games are "massively multiplayer", with many players participating simultaneously. Two massively multiplayer genres are MMORPG (such as World of Warcraft or EverQuest) and MMORTS. First-person shooters have become popular multiplayer games; Battlefield 1942 and Counter-Strike have little (or no) single-player gameplay. Developer and gaming site OMGPOP's library included multiplayer Flash games for the casual player until it was shut down in 2013.
Some networked multiplayer games, including MUDs and massively multiplayer online games (MMOs) such as RuneScape, omit a single-player mode. The largest MMO in 2008 was World of Warcraft, with over 10 million registered players worldwide. World of Warcraft would hit its peak at 12 million players two years later in 2010, and in 2023 earned the Guinness World Record for best-selling MMO video game. This category of games requires multiple machines to connect via the Internet; before the Internet became popular, MUDs were played on time-sharing computer systems and games like Doom were played on a LAN.

Beginning with the Sega NetLink in 1996, Game.com in 1997 and Dreamcast in 2000, game consoles have supported network gaming over LANs and the Internet. Many mobile phones and handheld consoles also offer wireless gaming with Bluetooth (or similar) technology. By the early 2010s online gaming had become a mainstay of console platforms such as Xbox and PlayStation. During the 2010s, as the number of Internet users increased, two new video game genres rapidly gained worldwide popularity – the multiplayer online battle arena and the battle royale game, both designed exclusively for multiplayer gameplay over the Internet. Over time the number of people playing video games has increased. As of 2020, the majority of households in the United States had an occupant who played video games, and 65% of gamers played multiplayer games with others either online or in person.

Local multiplayer For some games, "multiplayer" implies that players are playing on the same gaming system or network. This applies to all arcade games, but also to a number of console and personal computer games. Local multiplayer games played on a single system sometimes use split screen, so each player has an individual view of the action (important in first-person shooters and in racing video games). Nearly all multiplayer modes on beat 'em up games have a single-system option, but racing games have started to abandon split-screen in favor of a multiple-system, multiplayer mode. Turn-based games such as chess also lend themselves to a single system, a single screen, and even a single controller.

Multiple types of games allow players to use local multiplayer. The term "local co-op" or "couch co-op" refers to local multiplayer games played in a cooperative manner on the same system; these may use split-screen or some other display method. Another option is hot-seat games. Hot-seat games are typically turn-based games with only one controller or input set – such as a single keyboard/mouse on the system. Players rotate using the input device to perform their turn, such that each is taking a turn on the "hot-seat".

Not all local multiplayer games are played on the same console or personal computer. Some local multiplayer games are played over a LAN. This involves multiple devices using one local network to play together. Networked multiplayer games on a LAN eliminate common problems faced when playing online, such as lag and anonymity. Games played on a LAN are the focus of LAN parties. While local co-op and LAN parties still take place, there has been a decrease in both due to an increasing number of players and games utilizing online multiplayer gaming.

Online multiplayer Online multiplayer games connect players over a wide area network (a common example being the Internet). Unlike local multiplayer, players playing online multiplayer are not restricted to the same local network.
This allows players to interact with others from a much greater distance. Playing multiplayer online offers the benefits of distance, but it also comes with its own unique challenges. Gamers refer to latency using the term "ping", after a utility which measures round-trip network communication delays (by the use of ICMP packets); a rough way to observe such delays is sketched at the end of this subsection. A player on a DSL connection with a 50-ms ping can react faster than a modem user with a 350-ms average latency. Other problems include packet loss and choke, which can prevent a player from "registering" their actions with a server. In first-person shooters, this problem appears when bullets seem to hit the enemy without doing damage. The player's connection is not the only factor; some servers are slower than others. A server that is geographically closer to the player will often provide a lower ping, since data packets take less time to travel to a nearer destination. How far the device is from its local router can also affect latency.

Asymmetrical gameplay Asymmetrical multiplayer is a type of gameplay in which players can have significantly different roles or abilities from each other – enough to provide a significantly different experience of the game. In games with light asymmetry, the players share some of the same basic mechanics (such as movement and death), yet have different roles in the game; this is a common feature of the multiplayer online battle arena (MOBA) genre, such as League of Legends and Dota 2, and of hero shooters such as Overwatch and Apex Legends. A first-person shooter that adopts the asymmetrical multiplayer system is Tom Clancy's Rainbow Six Siege. Giving each player their own special operator changes every player's experience, putting an emphasis on players improvising their own game plan given the abilities their character has. In games with stronger elements of asymmetry, one player/team may have one gameplay experience (or be in softly asymmetric roles) while the other player or team play in a drastically different way, with different mechanics, a different type of objective, or both. Examples of games with strong asymmetry include Dead by Daylight, Evolve, and Left 4 Dead.

Asynchronous multiplayer Asynchronous multiplayer is a form of multiplayer gameplay where players are not necessarily playing at the same time. This form of multiplayer game has its origins in play-by-mail games, where players would send their moves through postal mail to a game master, who would then compile and send out results for the next turn. Play-by-mail games transitioned to electronic form as play-by-email games. Similar games were developed for bulletin board systems, such as Trade Wars, where the turn structure may not be as rigorous and allows players to take actions at any time in a persistent space alongside all other players, a concept known as sporadic play. These types of asynchronous multiplayer games waned with the widespread availability of the Internet, which allowed players to play against each other simultaneously, but they remain an option in many strategy-related games, such as the Civilization series. Coordination of turns is subsequently managed by one computer or a centralized server. Further, many mobile games are based on sporadic play and use social interactions with other players, lacking direct player versus player game modes but allowing players to influence other players' games, coordinated through central game servers, another facet of asynchronous play.
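As the rough sketch promised above: real ping uses ICMP echo packets, which require privileged raw sockets, but the time a TCP handshake takes is a serviceable proxy for one round trip. The POSIX C sketch below times a connect() to a host and port given on the command line; "example.com" is only a placeholder default, and this measures handshake time, not a true ICMP ping:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* Milliseconds from a monotonic clock, unaffected by wall-clock changes. */
    static double now_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(int argc, char **argv) {
        const char *host = argc > 1 ? argv[1] : "example.com";  /* placeholder */
        const char *port = argc > 2 ? argv[2] : "80";
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_socktype = SOCK_STREAM;   /* TCP: connect() completes one handshake */
        if (getaddrinfo(host, port, &hints, &res) != 0) {
            fprintf(stderr, "could not resolve %s\n", host);
            return 1;
        }
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        double t0 = now_ms();
        if (connect(fd, res->ai_addr, res->ai_addrlen) == 0)
            printf("TCP handshake with %s:%s took %.1f ms (about one round trip)\n",
                   host, port, now_ms() - t0);
        else
            perror("connect");
        close(fd);
        freeaddrinfo(res);
        return 0;
    }

A player comparing a 50 ms and a 350 ms result from such a measurement would see exactly the kind of reaction-time gap described above.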
Online cheating Online cheating (in gaming) usually refers to modifying the game experience to give one player an advantage over others, such as using an "aimbot" – a program which automatically locks the player's crosshairs onto a target – in shooting games. This is also known as "hacking" or "glitching" ("glitching" refers to using a glitch, or a mistake in the code of a game, whereas "hacking" is manipulating the code of a game). Cheating in video games is often done via a third-party program that modifies the game's code at runtime to give one or more players an advantage. In other situations, it is frequently done by changing the game's files to change the game's mechanics.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nursery_rhyme] | [TOKENS: 1940]
Contents Nursery rhyme A nursery rhyme is a traditional poem or song for children in Britain and other European countries, but usage of the term dates only from the late 18th/early 19th century. The term Mother Goose rhymes is interchangeable with nursery rhymes. From the mid-16th century, nursery rhymes began to be recorded in English plays, and most popular rhymes date from the 17th and 18th centuries. The first English collections, Tommy Thumb's Song Book and a sequel, Tommy Thumb's Pretty Song Book, were published by Mary Cooper in 1744. Publisher John Newbery's stepson, Thomas Carnan, was the first to use the term Mother Goose for nursery rhymes when he published a compilation of English rhymes, Mother Goose's Melody, or Sonnets for the Cradle (London, 1780).[note 1]

History The oldest children's songs for which records exist are lullabies, intended to help a child fall asleep. Lullabies can be found in every human culture. The English term lullaby is thought to come from "lu, lu" or "la la" sounds made by mothers or nurses to calm children, and "by by" or "bye bye", either another lulling sound or a term for a good night. Until the modern era, lullabies were usually recorded only incidentally in written sources. The Roman nurses' lullaby, "Lalla, Lalla, Lalla, aut dormi, aut lacta", is recorded in a scholium on Persius and may be the oldest to survive. Many medieval English verses associated with the birth of Jesus take the form of a lullaby, including "Lullay, my liking, my dere son, my sweting", and may be versions of contemporary lullabies. However, most of those used today date from the 17th century. For example, a well-known lullaby such as "Rock-a-bye Baby" could not be found in records until the late 18th century, when it was printed by John Newbery (c. 1765).

A French poem, similar to "Thirty days hath September", numbering the days of the month, was recorded in the 13th century. From the later Middle Ages, there are records of short children's rhyming songs, often as marginalia. From the mid-16th century, they began to be recorded in English plays. "Pat-a-cake" is one of the oldest surviving English nursery rhymes. The earliest recorded version of the rhyme appears in Thomas d'Urfey's play The Campaigners from 1698. Most nursery rhymes were not written down until the 18th century, when the publishing of children's books began to move from polemic and education towards entertainment, but there is evidence for many rhymes existing before this, including "To market, to market" and "Cock a doodle doo", which date from at least the late 16th century. Nursery rhymes with 17th-century origins include "Jack Sprat" (1639), "The Grand Old Duke of York" (1642), "Lavender's Blue" (1672) and "Rain Rain Go Away" (1687).

The first English collection, Tommy Thumb's Song Book, and a sequel, Tommy Thumb's Pretty Song Book, were published by Mary Cooper in London in 1744, with such songs becoming known as "Tommy Thumb's songs". A copy of the latter is held in the British Library. John Newbery's stepson, Thomas Carnan, was the first to use the term Mother Goose for nursery rhymes when he published a compilation of English rhymes, Mother Goose's Melody, or, Sonnets for the Cradle (London, 1780). These rhymes seem to have come from a variety of sources, including traditional riddles, proverbs, ballads, lines of Mummers' plays, drinking songs, historical events, and, it has been suggested, ancient pagan rituals.
One example of a nursery rhyme in the form of a riddle is "As I was going to St Ives", which dates to 1730. About half of the currently recognised "traditional" English rhymes were known by the mid-18th century. More English rhymes were collected by Joseph Ritson in Gammer Gurton's Garland or The Nursery Parnassus (1784), published in London by Joseph Johnson. In the early 19th century, printed collections of rhymes began to spread to other countries, including Robert Chambers' Popular Rhymes of Scotland (1826) and, in the United States, Mother Goose's Melodies (1833). From this period, the origins and authors of rhymes are sometimes known—for instance, "Twinkle, Twinkle, Little Star" combines the melody of an 18th-century French tune, "Ah vous dirai-je, Maman", with a 19th-century English poem by Jane Taylor entitled "The Star" used as lyrics.

Early folk song collectors also often collected (what is now known as) nursery rhymes, including in Scotland Sir Walter Scott and in Germany Clemens Brentano and Achim von Arnim in Des Knaben Wunderhorn (1806–1808). The first, and possibly the most important, academic collection to focus on this area was James Halliwell-Phillipps' The Nursery Rhymes of England (1842), followed by his Popular Rhymes and Tales in 1849, in which he divided rhymes into antiquities (historical), fireside stories, game-rhymes, alphabet-rhymes, riddles, nature-rhymes, places and families, proverbs, superstitions, customs, and nursery songs (lullabies). By the time of Sabine Baring-Gould's A Book of Nursery Songs (1895), folklore was an academic study full of comments and footnotes. The professional anthropologist Andrew Lang (1844–1912) produced The Nursery Rhyme Book in 1897. The early years of the 20th century are notable for the illustrations of children's books, including Randolph Caldecott's Hey Diddle Diddle Picture Book (1909) and Arthur Rackham's Mother Goose (1913). The definitive study of English rhymes remains the work of Iona and Peter Opie.

Meanings of nursery rhymes Many nursery rhymes have been argued to have hidden meanings and origins. John Bellenden Ker (1764–1842), for example, wrote four volumes arguing that English nursery rhymes were written in "Low Saxon", a hypothetical early form of Dutch. He then "translated" them back into English, revealing in particular a strong tendency to anti-clericalism. Many of the ideas about the links between rhymes and historical persons, or events, can be traced back to Katherine Elwes' book The Real Personages of Mother Goose (1930), in which she linked famous nursery rhyme characters with real people, on little or no evidence. She posited that children's songs were a peculiar form of coded historical narrative, propaganda or covert protest, and did not believe that they were written simply for entertainment.

Nursery rhyme revisionism There have been several attempts across the world to revise nursery rhymes (along with fairy tales and popular songs). As recently as the late 18th century, rhymes like "Little Robin Redbreast" were occasionally cleaned up for a young audience. In the late 19th century, the major concern seems to have been violence and crime, which led some children's publishers in the United States like Jacob Abbot and Samuel Goodrich to change Mother Goose rhymes. In the early and mid-20th centuries, this was a form of bowdlerisation, concerned with some of the more violent elements of nursery rhymes, and led to the formation of organisations like the British "Society for Nursery Rhyme Reform".
Psychoanalysts such as Bruno Bettelheim strongly criticised this revisionism on the grounds that it weakened the rhymes' usefulness to both children and adults as ways of symbolically resolving issues; it has been argued that revised versions may not perform the functions of catharsis for children, or allow them to imaginatively deal with violence and danger. In the late 20th century, revisionism of nursery rhymes became associated with the idea of political correctness. Most attempts to reform nursery rhymes on this basis appear to be either very small-scale, light-hearted updating, like Felix Dennis's When Jack Sued Jill – Nursery Rhymes for Modern Times (2006), or satires written as if from the point of view of political correctness in order to condemn reform. The controversy in Britain in 1986 over changing the language of "Baa, Baa, Black Sheep", because, it was alleged in the popular press, it was seen as racially dubious, was based only on a rewriting of the rhyme in one private nursery as an exercise for the children.

Nursery rhymes and education It has been argued that nursery rhymes set to music aid in a child's development. In the German Kniereitvers, the child is put in mock peril, but the experience is a pleasurable one of care and support, which over time the child comes to command for itself. Research also supports the assertion that music and rhyme increase a child's ability in spatial reasoning, which aids mathematics skills.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-99] | [TOKENS: 6011]
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology.

The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria.

Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation, which began 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period.

Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa.

Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa.

Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth.

Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs.

Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids.

Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges.

Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria.

Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars.

Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. Estimated numbers of described extant species have been compiled for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine) and free-living or parasitic ways of life. Such species estimates are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a]

Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record.

The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments.

Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do.

Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures.

Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny in which the animals' closest unicellular relatives are the Filasterea, Pluriformea, and Ichthyosporea, with the Holomycota (including the fungi) branching further out; uncertain relationships among these lineages are conventionally drawn with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla.

The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both; in the sponge-sister cladogram that they supported, Porifera branches first, followed by Ctenophora, then Placozoa, with Cnidaria and Bilateria as closest relatives (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, then Placozoa, again with Cnidaria and Bilateria as closest relatives.

Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food.

The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny of the Bilateria, the Xenacoelomorpha branch first, and the remaining bilaterians divide into the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia).

Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures.

Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians.

Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm.

The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, or growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food, and a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pets are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Habitability_of_brown_dwarf_systems] | [TOKENS: 755]
Contents Habitability of brown dwarf systems The habitability of brown dwarf planets is considered less plausible than for main-sequence stars, because the central dwarf cools over time and because the close-in habitable zone (HZ) subjects planets to strong tidal forces, but more plausible than for white dwarfs due to the lesser XUV and X-ray output. The planet would have to have an extremely low orbital eccentricity (ranging from Earth-like to ten-millionths depending on semimajor axis and brown dwarf mass) to avoid a tidal runaway greenhouse effect, and a planet's water and atmosphere may not survive the early stage when the planet is interior to the HZ. Details Brown dwarfs are estimated to be roughly 1/3 as frequent as M dwarfs in the solar neighborhood, with more massive ones outnumbering less massive ones, and research suggests that small rocky worlds are common around ultra-low-mass objects (ultra-cool dwarf stars as well as brown dwarfs). When tidal migration is considered, the lower limit for the probability of finding transiting habitable-zone planets around brown dwarfs within 7 pc is 4.5% (56% without tidal migration). Due to their short orbital periods, transiting planets orbiting brown dwarfs would be very quickly detected and confirmed, potentially even in one night if the orbital period is less than 8–10 hours. For a 0.04-solar-mass dwarf, the orbital period would range from 10–55 hours at an age of 1 Gyr to ~4 hours at an age of 10 Gyr. The transits would be very deep (1–5%) due to the small radius of the parent brown dwarf, although they would also be very short (10–40 minutes). Biosignatures would also be more easily detectable by JWST for brown dwarfs due to their small radius relative to their planets, with a maximum spectral type of ~M5V for a distance of 6.5 pc. The aging of the object would result in a shorter habitable zone duration for lower-mass objects, as they cool faster, and eventually the Roche limit becomes a problem. For a 0.04-solar-mass object, the maximum HZ duration would be 4 Gyr, and for a 0.07-solar-mass object, it would be up to 10 Gyr. An HZ duration under 0.1 Gyr would be problematic for the development of complex life, which mostly rules out the lowest-mass objects. That said, life-bearing conditions could still continue in a subsurface (i.e., Enceladus-like) state after the HZ moves interior to the planet's orbit. For a 0.04-solar-mass dwarf, the eccentricity of the orbiting planet would have to be on the order of 10⁻⁷ to prevent tidal Venus conditions at an age of 10 Gyr. For a younger object of 1 Gyr, the eccentricity would still have to be very low (0.00005). Early-life desiccation is another issue: a habitable-mass (0.1–10 Earth-mass) planet with an initial semimajor axis of 0.009 AU would require 50 Myr to reach the HZ of a 0.04-solar-mass brown dwarf, close to the approximate amount of time it takes to desiccate the planet. The model used suggests that a planet must have a minimum initial semimajor axis of 0.016 AU to avoid desiccation due to time spent interior to the HZ. Also, the tidal forces would push the planet's orbit outward, shortening the HZ duration.
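The short periods and deep transits described above follow from Kepler's third law and simple transit geometry. The Python sketch below reproduces the orders of magnitude for the article's 0.04-solar-mass dwarf and 0.009 AU initial orbit; the brown dwarf radius (~1 Jupiter radius, taken as 0.10 solar radii) and the Earth-sized planet are illustrative assumptions, not figures from the article:

    import math

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30    # solar mass, kg
    R_SUN = 6.957e8     # solar radius, m
    R_EARTH = 6.371e6   # Earth radius, m
    AU = 1.496e11       # astronomical unit, m

    M_bd = 0.04 * M_SUN     # brown dwarf mass used in the article
    a = 0.009 * AU          # initial semimajor axis quoted in the article

    # Kepler's third law: P = 2*pi*sqrt(a^3 / (G*M))
    P_hours = 2 * math.pi * math.sqrt(a**3 / (G * M_bd)) / 3600
    print(f"orbital period ~ {P_hours:.0f} hours")   # ~37 h, consistent with the 10-55 h scale quoted above

    # Geometric transit depth (Rp/R*)^2; an Earth-sized planet gives ~0.8%,
    # so the quoted 1-5% depths correspond to somewhat larger planets.
    depth = (R_EARTH / (0.10 * R_SUN)) ** 2
    print(f"transit depth ~ {100 * depth:.1f} %")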
========================================
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-Bolton1972-9] | [TOKENS: 13839]
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates quasars, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object.
In a short part of a letter published in 1784, Michell calculated that a star with the same density as the Sun but 500 times its radius would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity had yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars.
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which related the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time at an infinite distance from the black hole to confirm that nothing has escaped, so the definition cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this theory has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; the stellar black hole GRS 1915+105, for example, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being in the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become a white dwarf. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the areas around black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
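The mass-based classification described earlier in this section lends itself to a compact summary. The following Python helper is purely illustrative (the function name is ours, and the boundaries are the approximate values quoted in the text, not sharp physical limits):

    # Illustrative helper; mass boundaries follow the approximate
    # values quoted above and are debated in the literature.
    def classify_black_hole(mass_in_solar_masses: float) -> str:
        if mass_in_solar_masses < 2:
            return "micro / primordial (hypothetical)"
        if mass_in_solar_masses < 100:
            return "stellar"
        if mass_in_solar_masses < 1e5:
            return "intermediate-mass"
        if mass_in_solar_masses < 1e9:
            return "supermassive"
        return "ultramassive"

    print(classify_black_hole(30))      # stellar
    print(classify_black_hole(4.3e6))   # supermassive (e.g. Sagittarius A*)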
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbations to this orbit will lead to the particle spiraling into the black hole, and any outward perturbations will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where {\displaystyle r_{\rm {ISCO}}} is the radius of the ISCO, {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the black hole, {\displaystyle G} is the gravitational constant, and {\displaystyle c} is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde); a short numeric sketch of the non-spinning case follows.
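The Schwarzschild case of the formula above is easy to evaluate numerically. A minimal Python sketch (the 10-solar-mass example is our own illustrative choice, not a value from the article):

    # r_ISCO = 6GM/c^2 for a non-spinning black hole and a spinless particle.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg

    def r_isco_schwarzschild(mass_kg: float) -> float:
        return 6.0 * G * mass_kg / c**2

    r = r_isco_schwarzschild(10 * M_SUN)    # assumed 10-solar-mass black hole
    print(f"r_ISCO ~ {r / 1e3:.0f} km")     # ~89 km, i.e. three Schwarzschild radii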
For example, the ISCO for a particle orbiting retrograde can be as far out as about {\displaystyle 9r_{\text{s}}}, while the ISCO for a particle orbiting prograde can be as close as at the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be between 1 and 3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km} ,} where rs is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole contains a singularity, a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars, or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by annihilation of dark matter), or collapse driven by hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way which then could grow to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable to fragmentation to form multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually, the curvature of spacetime in the regions became large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays and Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as a binary of supermassive black holes approach each other, most nearby stars are ejected, leaving little for the remaining black holes to gravitationally interact with that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit, and can be evaluated directly, as in the sketch below. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
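For scale, the Eddington limit has the standard textbook form L_Edd = 4πGM·m_p·c/σ_T for electron-scattering opacity in a hydrogen plasma. The following Python sketch evaluates it (the formula is a general astrophysics result, not taken from this article's references):

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    m_p = 1.673e-27      # proton mass, kg
    sigma_T = 6.652e-29  # Thomson cross-section, m^2
    M_SUN = 1.989e30     # solar mass, kg

    def eddington_luminosity(mass_kg: float) -> float:
        # Outward radiation pressure on electrons balances inward gravity on protons.
        return 4.0 * math.pi * G * mass_kg * m_p * c / sigma_T

    print(f"{eddington_luminosity(M_SUN):.2e} W")          # ~1.3e31 W for 1 solar mass
    print(f"{eddington_luminosity(4.3e6 * M_SUN):.2e} W")  # a Sagittarius A*-mass hole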
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation.

The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly while the galaxy itself is still accreting matter, can compress nearby gas and accelerate star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centres to be hotter than expected.

If Hawking's theory of black hole radiation is correct, black holes are expected to shrink and evaporate over time as they lose mass through the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to its mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins, far below the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes absorb more energy from the cosmic microwave background than they emit through Hawking radiation and thus grow instead of shrinking. To have a Hawking temperature above 2.7 K (and so be able to evaporate), a black hole would need a mass smaller than that of the Moon; such a black hole would have a diameter of less than a tenth of a millimetre.

The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes; modern research predicts that primordial black holes can make up no more than a fraction of 10^−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any.

The properties of a black hole are constrained and interrelated by the theories that predict those properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says that the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
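Behind the thermodynamic analogy stands the Hawking temperature quoted above, T = ħc³/(8πGMk_B), which makes the inverse dependence on mass explicit. A minimal numerical check of those figures (SI constants; a sketch, not a full calculation):

    # Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B),
    # inversely proportional to mass.
    import math

    HBAR = 1.055e-34   # reduced Planck constant, J*s
    C = 2.998e8        # speed of light, m/s
    G = 6.674e-11      # gravitational constant, SI
    K_B = 1.381e-23    # Boltzmann constant, J/K
    M_SUN = 1.989e30   # solar mass, kg
    M_MOON = 7.35e22   # lunar mass, kg

    def hawking_temperature(mass_kg):
        return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

    print(hawking_temperature(M_SUN))   # ~6.2e-8 K: the 62 nanokelvin figure
    print(hawking_temperature(M_MOON))  # ~1.7 K: still colder than the 2.7 K CMB

    # Mass at which the Hawking temperature equals the 2.7 K background:
    m_crit = HBAR * C**3 / (8 * math.pi * G * 2.7 * K_B)
    print(m_crit)                       # ~4.5e22 kg, below the Moon's mass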
The two sets of laws are not equivalent, however, because according to general relativity without quantum mechanics a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole continuously emits thermal Hawking radiation and therefore must always have a nonzero temperature. It also predicts that every black hole has an entropy that scales with its surface area. When quantum mechanics is taken into account, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. These conclusions, however, are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. The true quantum nature of black hole thermodynamics therefore continues to be debated.

Observational evidence

Millions of black holes of around 30 solar masses, derived from stellar collapse, are expected to exist in the Milky Way; even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.

The Event Horizon Telescope (EHT) is a global network of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope depends on its aperture and the wavelengths at which it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.

Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long arms. The beams reflect off mirrors at the ends of the arms and converge again at the intersection, where they cancel each other out. When a gravitational wave passes, however, it warps spacetime and changes the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal, whose analysis can tell scientists what caused the gravitational waves. Because gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometres long and must carefully control for terrestrial noise. Since the first detection, announced in 2016, multiple gravitational waves from black holes have been detected and analyzed.

The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
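Kepler's third law converts such a tracked orbit into an enclosed mass, M = 4π²a³/(GT²). The sketch below uses round values close to the published orbit of the star S2; the semi-major axis and period are illustrative approximations, not source values:

    # Enclosed mass from Kepler's third law: M = 4 * pi^2 * a^3 / (G * T^2).
    # Round, illustrative values approximating the orbit of the star S2.
    import math

    G = 6.674e-11     # gravitational constant, SI
    M_SUN = 1.989e30  # solar mass, kg
    AU = 1.496e11     # astronomical unit, m
    YEAR = 3.156e7    # year, s

    a = 1000 * AU     # semi-major axis (~1000 AU, illustrative)
    T = 16.0 * YEAR   # orbital period (~16 years, illustrative)

    mass = 4 * math.pi**2 * a**3 / (G * T**2)
    print(mass / M_SUN)   # ~4e6 solar masses, matching the quoted estimates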
In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers inferred that a 2.6×10^6 M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers refined the mass of Sagittarius A* to 4.3×10^6 M☉, contained within a radius of less than 0.002 light-years. This upper limit on the radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not by itself prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there is no other plausible scenario for confining so much invisible mass in such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.

X-ray binaries are binary systems that emit most of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object (a sketch of this mass-function bound appears below). The Tolman-Oppenheimer-Volkoff (TOV) limit sets the largest mass a nonrotating neutron star can have, estimated at about two solar masses. While a rotating neutron star can be slightly more massive, a compact object much more massive than the TOV limit cannot be a neutron star and is generally expected to be a black hole.

The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion.

X-ray binaries are categorized as either low-mass or high-mass; the classification is based on the mass of the companion star, not of the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes.

The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
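The X-ray binary mass estimates mentioned above rest on the binary mass function, f = PK³/(2πG) = M_x³ sin³i/(M_x + M_c)², which is a strict lower bound on the compact object's mass M_x. A minimal sketch using figures reported in the literature for the soft X-ray transient V404 Cygni, quoted here for illustration:

    # Binary mass function f = P * K^3 / (2 * pi * G): a hard lower bound on the
    # compact object's mass, needing only the companion's orbital period P and
    # radial-velocity semi-amplitude K. Illustrative values for V404 Cygni.
    import math

    G = 6.674e-11     # gravitational constant, SI
    M_SUN = 1.989e30  # solar mass, kg

    P = 6.47 * 86400  # orbital period, s
    K = 208.5e3       # radial-velocity semi-amplitude, m/s

    f = P * K**3 / (2 * math.pi * G)
    print(f / M_SUN)  # ~6.1 solar masses, already above the TOV limit

Because the mass function is a lower bound that holds regardless of the unknown orbital inclination and companion mass, a value this far above the neutron-star limit identifies the compact object as a black hole candidate.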
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high level of activity in the centres of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. An AGN consists of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to that disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself.

Black holes can also be detected through the effects of their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object deflects light rays, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth, then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022 astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first mass determination for an isolated black hole: 7.1±1.3 M☉.

Alternatives

While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive compact objects. Quark stars would be made of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars would go further, converting quarks in their cores into leptons and so providing additional pressure against collapse. And if, as some extensions of the Standard Model posit, quarks and leptons are themselves made of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, the Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative to supermassive black holes.

A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically while functioning via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than that of the surrounding space, exerting outward pressure and preventing a singularity from forming. A black star would be collapsing gravitationally, but slowly enough that quantum effects keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell around a dark-energy interior whose outward pressure stops the collapse into a black hole or the formation of a singularity; it could even contain another gravastar inside, a configuration called a 'nestar'.

Open questions

According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that formed the black hole is lost, as nothing about the black hole can be determined from outside other than those three parameters. When black holes were thought to persist forever, this loss of information was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation, and this radiation does not appear to carry any additional information about the matter that formed the hole, meaning the information is seemingly gone forever. This is the black hole information paradox. Theoretical studies of the paradox have led both to further paradoxes and to new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on its resolution, work on the problem is expected to be important for a theory of quantum gravity.

Observations of faraway galaxies have found ultraluminous quasars, powered by supermassive black holes, in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. Smaller black holes may have undergone mergers to produce the observed supermassive ones. They may also have been seeded by direct-collapse black holes, in which a large cloud of hot gas, because of low angular momentum or heating from a nearby galaxy, avoids the fragmentation that would otherwise produce multiple stars; given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, the supermassive black holes of the early universe may be high-mass primordial black holes that accreted further matter in the centres of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, for instance when dense gas in the accretion disk suppresses the outward radiation pressure that would otherwise throttle accretion; however, the formation of bipolar jets prevents sustained super-Eddington rates.

In fiction

Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with the characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the "hole in space" of the 1935 short story Starship Invincible. As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar.

Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a black hole planet with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching, but from the perspective of an outside observer never crossing, the event horizon of a black hole because of time dilation. Black holes have also been appropriated as wormholes or other means of faster-than-light travel, as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Black holes can also feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.